Test Report: Docker_macOS 15909

                    
468919b2fcd0c7cf0d4c8e9733c4c1a0b87a5208:2023-02-23:28038

Failed tests (16/306)

TestIngressAddonLegacy/StartLegacyK8sCluster (268.88s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-234000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0223 14:09:23.924039   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:11:40.077777   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:12:06.016983   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.022744   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.033633   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.055823   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.096315   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.178483   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.340714   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:06.662893   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:07.304536   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:07.762856   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:12:08.586646   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:11.147078   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:16.267183   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:26.507449   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:12:46.988225   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:13:27.950069   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-234000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m28.850623547s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-234000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-234000 in cluster ingress-addon-legacy-234000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 14:09:00.086067   18216 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:09:00.086231   18216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:09:00.086236   18216 out.go:309] Setting ErrFile to fd 2...
	I0223 14:09:00.086239   18216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:09:00.086349   18216 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:09:00.087710   18216 out.go:303] Setting JSON to false
	I0223 14:09:00.106163   18216 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5915,"bootTime":1677184225,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 14:09:00.106252   18216 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 14:09:00.127714   18216 out.go:177] * [ingress-addon-legacy-234000] minikube v1.29.0 on Darwin 13.2
	I0223 14:09:00.170108   18216 notify.go:220] Checking for updates...
	I0223 14:09:00.191685   18216 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 14:09:00.213037   18216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:09:00.234757   18216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 14:09:00.255805   18216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 14:09:00.277043   18216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	I0223 14:09:00.298944   18216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 14:09:00.321144   18216 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 14:09:00.383102   18216 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 14:09:00.383247   18216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:09:00.524533   18216 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 22:09:00.432919381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:09:00.545949   18216 out.go:177] * Using the docker driver based on user configuration
	I0223 14:09:00.567823   18216 start.go:296] selected driver: docker
	I0223 14:09:00.567856   18216 start.go:857] validating driver "docker" against <nil>
	I0223 14:09:00.567875   18216 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 14:09:00.571851   18216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:09:00.712356   18216 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 22:09:00.620687221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:09:00.712513   18216 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 14:09:00.712690   18216 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 14:09:00.734109   18216 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 14:09:00.756264   18216 cni.go:84] Creating CNI manager for ""
	I0223 14:09:00.756302   18216 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 14:09:00.756318   18216 start_flags.go:319] config:
	{Name:ingress-addon-legacy-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-234000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:09:00.799882   18216 out.go:177] * Starting control plane node ingress-addon-legacy-234000 in cluster ingress-addon-legacy-234000
	I0223 14:09:00.821220   18216 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 14:09:00.843190   18216 out.go:177] * Pulling base image ...
	I0223 14:09:00.865091   18216 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 14:09:00.865133   18216 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 14:09:00.920760   18216 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 14:09:00.920783   18216 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 14:09:00.977669   18216 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0223 14:09:00.977710   18216 cache.go:57] Caching tarball of preloaded images
	I0223 14:09:00.978136   18216 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 14:09:01.000354   18216 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0223 14:09:01.021748   18216 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0223 14:09:01.238356   18216 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0223 14:09:18.597127   18216 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0223 14:09:18.597318   18216 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0223 14:09:19.221001   18216 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0223 14:09:19.221228   18216 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/config.json ...
	I0223 14:09:19.221255   18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/config.json: {Name:mk12bfdb3c9a368b15e2e757666b494b163760fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:09:19.221537   18216 cache.go:193] Successfully downloaded all kic artifacts
	I0223 14:09:19.221564   18216 start.go:364] acquiring machines lock for ingress-addon-legacy-234000: {Name:mk117825bbd4fd1d51609d1f587776a77771cdf0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 14:09:19.221695   18216 start.go:368] acquired machines lock for "ingress-addon-legacy-234000" in 123.523µs
	I0223 14:09:19.221720   18216 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-234000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 14:09:19.221763   18216 start.go:125] createHost starting for "" (driver="docker")
	I0223 14:09:19.266015   18216 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0223 14:09:19.266377   18216 start.go:159] libmachine.API.Create for "ingress-addon-legacy-234000" (driver="docker")
	I0223 14:09:19.266421   18216 client.go:168] LocalClient.Create starting
	I0223 14:09:19.266619   18216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem
	I0223 14:09:19.266701   18216 main.go:141] libmachine: Decoding PEM data...
	I0223 14:09:19.266735   18216 main.go:141] libmachine: Parsing certificate...
	I0223 14:09:19.266842   18216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem
	I0223 14:09:19.266912   18216 main.go:141] libmachine: Decoding PEM data...
	I0223 14:09:19.266930   18216 main.go:141] libmachine: Parsing certificate...
	I0223 14:09:19.267789   18216 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-234000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 14:09:19.326417   18216 cli_runner.go:211] docker network inspect ingress-addon-legacy-234000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 14:09:19.326536   18216 network_create.go:281] running [docker network inspect ingress-addon-legacy-234000] to gather additional debugging logs...
	I0223 14:09:19.326553   18216 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-234000
	W0223 14:09:19.382500   18216 cli_runner.go:211] docker network inspect ingress-addon-legacy-234000 returned with exit code 1
	I0223 14:09:19.382528   18216 network_create.go:284] error running [docker network inspect ingress-addon-legacy-234000]: docker network inspect ingress-addon-legacy-234000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-234000
	I0223 14:09:19.382541   18216 network_create.go:286] output of [docker network inspect ingress-addon-legacy-234000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-234000
	
	** /stderr **
	I0223 14:09:19.382634   18216 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 14:09:19.436845   18216 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00136ff60}
	I0223 14:09:19.436887   18216 network_create.go:123] attempt to create docker network ingress-addon-legacy-234000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0223 14:09:19.436963   18216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-234000 ingress-addon-legacy-234000
	I0223 14:09:19.525353   18216 network_create.go:107] docker network ingress-addon-legacy-234000 192.168.49.0/24 created
	I0223 14:09:19.525407   18216 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-234000" container
	I0223 14:09:19.525536   18216 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 14:09:19.583501   18216 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-234000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-234000 --label created_by.minikube.sigs.k8s.io=true
	I0223 14:09:19.638514   18216 oci.go:103] Successfully created a docker volume ingress-addon-legacy-234000
	I0223 14:09:19.638663   18216 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-234000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-234000 --entrypoint /usr/bin/test -v ingress-addon-legacy-234000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 14:09:20.061403   18216 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-234000
	I0223 14:09:20.061450   18216 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 14:09:20.061464   18216 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 14:09:20.061589   18216 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-234000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 14:09:26.015735   18216 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-234000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (5.954098735s)
	I0223 14:09:26.015758   18216 kic.go:199] duration metric: took 5.954346 seconds to extract preloaded images to volume
	I0223 14:09:26.015873   18216 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 14:09:26.164146   18216 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-234000 --name ingress-addon-legacy-234000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-234000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-234000 --network ingress-addon-legacy-234000 --ip 192.168.49.2 --volume ingress-addon-legacy-234000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 14:09:26.511842   18216 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-234000 --format={{.State.Running}}
	I0223 14:09:26.571082   18216 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-234000 --format={{.State.Status}}
	I0223 14:09:26.634126   18216 cli_runner.go:164] Run: docker exec ingress-addon-legacy-234000 stat /var/lib/dpkg/alternatives/iptables
	I0223 14:09:26.738389   18216 oci.go:144] the created container "ingress-addon-legacy-234000" has a running status.
	I0223 14:09:26.738430   18216 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa...
	I0223 14:09:26.883872   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 14:09:26.883938   18216 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 14:09:27.054106   18216 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-234000 --format={{.State.Status}}
	I0223 14:09:27.112854   18216 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 14:09:27.112883   18216 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-234000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 14:09:27.213763   18216 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-234000 --format={{.State.Status}}
	I0223 14:09:27.270441   18216 machine.go:88] provisioning docker machine ...
	I0223 14:09:27.270486   18216 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-234000"
	I0223 14:09:27.270599   18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:09:27.327331   18216 main.go:141] libmachine: Using SSH client type: native
	I0223 14:09:27.327724   18216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58153 <nil> <nil>}
	I0223 14:09:27.327740   18216 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-234000 && echo "ingress-addon-legacy-234000" | sudo tee /etc/hostname
	I0223 14:09:27.470593   18216 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-234000
	
	I0223 14:09:27.470676   18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:09:27.529213   18216 main.go:141] libmachine: Using SSH client type: native
	I0223 14:09:27.529576   18216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58153 <nil> <nil>}
	I0223 14:09:27.529593   18216 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-234000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-234000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-234000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 14:09:27.663564   18216 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 14:09:27.663588   18216 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
	I0223 14:09:27.663615   18216 ubuntu.go:177] setting up certificates
	I0223 14:09:27.663627   18216 provision.go:83] configureAuth start
	I0223 14:09:27.663701   18216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-234000
	I0223 14:09:27.720189   18216 provision.go:138] copyHostCerts
	I0223 14:09:27.720235   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:09:27.720298   18216 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
	I0223 14:09:27.720307   18216 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:09:27.720414   18216 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
	I0223 14:09:27.720572   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:09:27.720606   18216 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
	I0223 14:09:27.720610   18216 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:09:27.720684   18216 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
	I0223 14:09:27.720827   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:09:27.720863   18216 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
	I0223 14:09:27.720867   18216 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:09:27.720928   18216 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
	I0223 14:09:27.721048   18216 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-234000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-234000]
	I0223 14:09:27.986410   18216 provision.go:172] copyRemoteCerts
	I0223 14:09:27.986479   18216 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 14:09:27.986538   18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:09:28.044006   18216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58153 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa Username:docker}
	I0223 14:09:28.138890   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 14:09:28.138983   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 14:09:28.156506   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 14:09:28.156588   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0223 14:09:28.173443   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 14:09:28.173532   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 14:09:28.191197   18216 provision.go:86] duration metric: configureAuth took 527.557432ms
	I0223 14:09:28.191217   18216 ubuntu.go:193] setting minikube options for container-runtime
	I0223 14:09:28.191375   18216 config.go:182] Loaded profile config "ingress-addon-legacy-234000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 14:09:28.191437   18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:09:28.248911   18216 main.go:141] libmachine: Using SSH client type: native
	I0223 14:09:28.249261   18216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58153 <nil> <nil>}
	I0223 14:09:28.249279   18216 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 14:09:28.385137   18216 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 14:09:28.385156   18216 ubuntu.go:71] root file system type: overlay
	I0223 14:09:28.385274   18216 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 14:09:28.385363   18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:09:28.441731   18216 main.go:141] libmachine: Using SSH client type: native
	I0223 14:09:28.442089   18216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58153 <nil> <nil>}
	I0223 14:09:28.442139   18216 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 14:09:28.585791   18216 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 14:09:28.585901   18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:09:28.643223   18216 main.go:141] libmachine: Using SSH client type: native
	I0223 14:09:28.643581   18216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58153 <nil> <nil>}
	I0223 14:09:28.643596   18216 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 14:09:29.259206   18216 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:09:28.583009950 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 14:09:29.259235   18216 machine.go:91] provisioned docker machine in 1.988789802s
	I0223 14:09:29.259241   18216 client.go:171] LocalClient.Create took 9.992902055s
	I0223 14:09:29.259258   18216 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-234000" took 9.992972968s
	I0223 14:09:29.259269   18216 start.go:300] post-start starting for "ingress-addon-legacy-234000" (driver="docker")
	I0223 14:09:29.259276   18216 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 14:09:29.259368   18216 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 14:09:29.259421   18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:09:29.317545   18216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58153 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa Username:docker}
	I0223 14:09:29.412529   18216 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 14:09:29.416046   18216 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 14:09:29.416067   18216 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 14:09:29.416074   18216 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 14:09:29.416079   18216 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 14:09:29.416089   18216 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
	I0223 14:09:29.416188   18216 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
	I0223 14:09:29.416366   18216 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
	I0223 14:09:29.416372   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> /etc/ssl/certs/152102.pem
	I0223 14:09:29.416566   18216 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 14:09:29.423772   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:09:29.440907   18216 start.go:303] post-start completed in 181.630762ms
	I0223 14:09:29.441476   18216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-234000
	I0223 14:09:29.498283   18216 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/config.json ...
	I0223 14:09:29.498709   18216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 14:09:29.498772   18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:09:29.556983   18216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58153 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa Username:docker}
	I0223 14:09:29.648368   18216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 14:09:29.653138   18216 start.go:128] duration metric: createHost completed in 10.431458333s
	I0223 14:09:29.653158   18216 start.go:83] releasing machines lock for "ingress-addon-legacy-234000", held for 10.431548307s
	I0223 14:09:29.653280   18216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-234000
	I0223 14:09:29.710445   18216 ssh_runner.go:195] Run: cat /version.json
	I0223 14:09:29.710488   18216 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0223 14:09:29.710523   18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:09:29.710560   18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:09:29.769656   18216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58153 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa Username:docker}
	I0223 14:09:29.770191   18216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58153 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa Username:docker}
	I0223 14:09:30.119861   18216 ssh_runner.go:195] Run: systemctl --version
	I0223 14:09:30.124449   18216 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 14:09:30.129431   18216 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 14:09:30.148809   18216 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 14:09:30.148899   18216 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 14:09:30.162362   18216 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 14:09:30.169760   18216 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
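The find/sed commands above patch the loopback CNI config and rewrite the bridge config's subnet to 10.244.0.0/16. A minimal sketch, assuming a shell on the node (for example via 'minikube ssh -p ingress-addon-legacy-234000'), of confirming what the patching left behind:

    # List the CNI configs and check the bridge subnet rewritten above.
    ls /etc/cni/net.d/
    sudo grep -n '"subnet"' /etc/cni/net.d/100-crio-bridge.conf   # expect "10.244.0.0/16"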
	I0223 14:09:30.169777   18216 start.go:485] detecting cgroup driver to use...
	I0223 14:09:30.169788   18216 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:09:30.169876   18216 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:09:30.182674   18216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
	I0223 14:09:30.190880   18216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 14:09:30.199225   18216 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 14:09:30.199284   18216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 14:09:30.207631   18216 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:09:30.215731   18216 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 14:09:30.223815   18216 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:09:30.232151   18216 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 14:09:30.239860   18216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 14:09:30.248162   18216 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 14:09:30.255420   18216 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 14:09:30.262379   18216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:09:30.329271   18216 ssh_runner.go:195] Run: sudo systemctl restart containerd
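The sed edits above switch containerd to the cgroupfs driver, the runc v2 runtime and the k8s.gcr.io/pause:3.2 sandbox image before the restart. A short sketch (same node shell assumed) of verifying that /etc/containerd/config.toml ended up with those values:

    # Spot-check the containerd settings rewritten above, then the service state.
    grep -n 'SystemdCgroup' /etc/containerd/config.toml    # expect: SystemdCgroup = false
    grep -n 'sandbox_image' /etc/containerd/config.toml    # expect: "k8s.gcr.io/pause:3.2"
    sudo systemctl is-active containerd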
	I0223 14:09:30.396351   18216 start.go:485] detecting cgroup driver to use...
	I0223 14:09:30.396372   18216 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:09:30.396447   18216 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 14:09:30.407153   18216 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 14:09:30.407229   18216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 14:09:30.417092   18216 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:09:30.430752   18216 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 14:09:30.521747   18216 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 14:09:30.612712   18216 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 14:09:30.612734   18216 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 14:09:30.625778   18216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:09:30.716713   18216 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 14:09:30.933212   18216 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:09:30.958050   18216 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
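Docker has just been reconfigured, via the 144-byte /etc/docker/daemon.json written above, to use the cgroupfs driver, which is the same driver the kubelet configuration below declares. A sketch of checking both sides of that agreement from the node; the 'docker info' call is the one minikube itself issues further down:

    # The Docker cgroup driver must match the kubelet's cgroupDriver (cgroupfs here).
    docker info --format '{{.CgroupDriver}}'        # expect: cgroupfs
    cat /etc/docker/daemon.json                     # the file scp'd above
    docker version --format '{{.Server.Version}}'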
	I0223 14:09:31.005379   18216 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
	I0223 14:09:31.005622   18216 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-234000 dig +short host.docker.internal
	I0223 14:09:31.117465   18216 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 14:09:31.117576   18216 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 14:09:31.121874   18216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:09:31.131799   18216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:09:31.187385   18216 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 14:09:31.187470   18216 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:09:31.207365   18216 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0223 14:09:31.207383   18216 docker.go:560] Images already preloaded, skipping extraction
	I0223 14:09:31.207483   18216 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:09:31.227312   18216 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0223 14:09:31.227328   18216 cache_images.go:84] Images are preloaded, skipping loading
	I0223 14:09:31.227422   18216 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 14:09:31.252625   18216 cni.go:84] Creating CNI manager for ""
	I0223 14:09:31.252643   18216 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 14:09:31.252659   18216 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 14:09:31.252678   18216 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-234000 NodeName:ingress-addon-legacy-234000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 14:09:31.252784   18216 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-234000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
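The kubeadm config printed above is written to the node as /var/tmp/minikube/kubeadm.yaml (scp'd as kubeadm.yaml.new below and copied into place before 'kubeadm init --config'). A sketch, under the assumption that the v1.18.20 kubeadm binary is already installed under /var/lib/minikube/binaries, of inspecting that file and re-running only the preflight phase against it to reproduce the warnings reported later without attempting a full init:

    # Inspect the rendered config, then dry-run just the preflight checks against it.
    sudo cat /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml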
	I0223 14:09:31.252897   18216 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-234000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-234000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
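The [Service] override above becomes /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (scp'd a few lines below). A sketch of inspecting the unit the kubelet actually starts with, which is useful later when kubeadm reports the kubelet as not running:

    # Show the kubelet unit together with the 10-kubeadm.conf drop-in, then its state.
    sudo systemctl cat kubelet
    sudo systemctl status kubelet --no-pager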
	I0223 14:09:31.252971   18216 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0223 14:09:31.260769   18216 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 14:09:31.260833   18216 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 14:09:31.267984   18216 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0223 14:09:31.280395   18216 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0223 14:09:31.292889   18216 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0223 14:09:31.305695   18216 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0223 14:09:31.309546   18216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:09:31.319124   18216 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000 for IP: 192.168.49.2
	I0223 14:09:31.319142   18216 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:09:31.319314   18216 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
	I0223 14:09:31.319377   18216 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
	I0223 14:09:31.319428   18216 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/client.key
	I0223 14:09:31.319440   18216 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/client.crt with IP's: []
	I0223 14:09:31.402212   18216 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/client.crt ...
	I0223 14:09:31.402221   18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/client.crt: {Name:mka83784595163acae28f8a405113a29c8ea9c21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:09:31.402498   18216 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/client.key ...
	I0223 14:09:31.402521   18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/client.key: {Name:mk96fcfd95bf7721cd99c441f54df0de6313ebb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:09:31.402705   18216 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key.dd3b5fb2
	I0223 14:09:31.402719   18216 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 14:09:31.488818   18216 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt.dd3b5fb2 ...
	I0223 14:09:31.488827   18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt.dd3b5fb2: {Name:mk21bd644e91e2d025473b2665c4f1ebf6259523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:09:31.489047   18216 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key.dd3b5fb2 ...
	I0223 14:09:31.489054   18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key.dd3b5fb2: {Name:mk6223f0b125b2b52d35b702c877e6102f293e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:09:31.489231   18216 certs.go:333] copying /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt
	I0223 14:09:31.489492   18216 certs.go:337] copying /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key
	I0223 14:09:31.489671   18216 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.key
	I0223 14:09:31.489690   18216 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.crt with IP's: []
	I0223 14:09:31.631795   18216 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.crt ...
	I0223 14:09:31.631805   18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.crt: {Name:mk5562f0ddb7e97b10f2f26074b304376416df09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:09:31.632047   18216 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.key ...
	I0223 14:09:31.632056   18216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.key: {Name:mk10285d1a3bc8975016c7e39267005300abacce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:09:31.632256   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 14:09:31.632285   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 14:09:31.632305   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 14:09:31.632328   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 14:09:31.632349   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 14:09:31.632371   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 14:09:31.632391   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 14:09:31.632409   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 14:09:31.632506   18216 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
	W0223 14:09:31.632552   18216 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
	I0223 14:09:31.632562   18216 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 14:09:31.632594   18216 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
	I0223 14:09:31.632643   18216 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
	I0223 14:09:31.632678   18216 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
	I0223 14:09:31.632742   18216 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:09:31.632781   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:09:31.632802   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem -> /usr/share/ca-certificates/15210.pem
	I0223 14:09:31.632821   18216 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> /usr/share/ca-certificates/152102.pem
	I0223 14:09:31.633327   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 14:09:31.651252   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 14:09:31.668174   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 14:09:31.685086   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/ingress-addon-legacy-234000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0223 14:09:31.702016   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 14:09:31.718811   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 14:09:31.735782   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 14:09:31.752805   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 14:09:31.769539   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 14:09:31.787007   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
	I0223 14:09:31.803958   18216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
	I0223 14:09:31.820891   18216 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 14:09:31.833749   18216 ssh_runner.go:195] Run: openssl version
	I0223 14:09:31.839234   18216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
	I0223 14:09:31.847158   18216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
	I0223 14:09:31.851010   18216 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:09:31.851056   18216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
	I0223 14:09:31.856279   18216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
	I0223 14:09:31.864157   18216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
	I0223 14:09:31.872078   18216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
	I0223 14:09:31.875976   18216 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:09:31.876029   18216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
	I0223 14:09:31.881174   18216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 14:09:31.889302   18216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 14:09:31.897170   18216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:09:31.901395   18216 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:09:31.901447   18216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:09:31.906745   18216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
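The openssl/ln pairs above install each CA into the node's trust store under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0). The same pattern, written out generically with minikubeCA.pem as the example:

    # Link a CA certificate under its subject-hash name, as done for each cert above.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"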
	I0223 14:09:31.914660   18216 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-234000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:09:31.914766   18216 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 14:09:31.933269   18216 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 14:09:31.940931   18216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 14:09:31.948092   18216 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 14:09:31.948158   18216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 14:09:31.955351   18216 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 14:09:31.955375   18216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 14:09:32.002658   18216 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0223 14:09:32.002732   18216 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 14:09:32.168079   18216 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 14:09:32.168209   18216 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 14:09:32.168295   18216 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 14:09:32.319675   18216 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 14:09:32.320147   18216 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 14:09:32.320188   18216 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 14:09:32.397172   18216 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 14:09:32.438604   18216 out.go:204]   - Generating certificates and keys ...
	I0223 14:09:32.438726   18216 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 14:09:32.438813   18216 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 14:09:32.535847   18216 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 14:09:32.700723   18216 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 14:09:32.802484   18216 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 14:09:33.043420   18216 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 14:09:33.138958   18216 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 14:09:33.139091   18216 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-234000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0223 14:09:33.265888   18216 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 14:09:33.266013   18216 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-234000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0223 14:09:33.338315   18216 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 14:09:33.658707   18216 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 14:09:33.836892   18216 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 14:09:33.836934   18216 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 14:09:33.958489   18216 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 14:09:34.149348   18216 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 14:09:34.530822   18216 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 14:09:34.791992   18216 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 14:09:34.792675   18216 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 14:09:34.834960   18216 out.go:204]   - Booting up control plane ...
	I0223 14:09:34.835085   18216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 14:09:34.835161   18216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 14:09:34.835281   18216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 14:09:34.835374   18216 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 14:09:34.835502   18216 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 14:10:14.801852   18216 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 14:10:14.802543   18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:10:14.802796   18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:10:19.803192   18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:10:19.803343   18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:10:29.805207   18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:10:29.805440   18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:10:49.805481   18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:10:49.805672   18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:11:29.806183   18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:11:29.806356   18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:11:29.806370   18216 kubeadm.go:322] 
	I0223 14:11:29.806400   18216 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0223 14:11:29.806447   18216 kubeadm.go:322] 		timed out waiting for the condition
	I0223 14:11:29.806464   18216 kubeadm.go:322] 
	I0223 14:11:29.806516   18216 kubeadm.go:322] 	This error is likely caused by:
	I0223 14:11:29.806559   18216 kubeadm.go:322] 		- The kubelet is not running
	I0223 14:11:29.806675   18216 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 14:11:29.806687   18216 kubeadm.go:322] 
	I0223 14:11:29.806771   18216 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 14:11:29.806818   18216 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0223 14:11:29.806851   18216 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0223 14:11:29.806857   18216 kubeadm.go:322] 
	I0223 14:11:29.806963   18216 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 14:11:29.807022   18216 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0223 14:11:29.807029   18216 kubeadm.go:322] 
	I0223 14:11:29.807101   18216 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0223 14:11:29.807158   18216 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0223 14:11:29.807228   18216 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0223 14:11:29.807257   18216 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0223 14:11:29.807263   18216 kubeadm.go:322] 
	I0223 14:11:29.809880   18216 kubeadm.go:322] W0223 22:09:32.002014    1159 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0223 14:11:29.810023   18216 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 14:11:29.810076   18216 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 14:11:29.810200   18216 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
	I0223 14:11:29.810293   18216 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 14:11:29.810400   18216 kubeadm.go:322] W0223 22:09:34.797029    1159 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 14:11:29.810506   18216 kubeadm.go:322] W0223 22:09:34.797829    1159 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 14:11:29.810582   18216 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 14:11:29.810640   18216 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
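At this point kubeadm has spent four minutes retrying the kubelet healthz endpoint on 127.0.0.1:10248 and given up. The commands it suggests above can be run on the node (for example via 'minikube ssh -p ingress-addon-legacy-234000') to see why the kubelet never answered; a sketch:

    # Follow kubeadm's own troubleshooting hints from the output above.
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    curl -sS http://localhost:10248/healthz     # the probe kubeadm kept retrying
    docker ps -a | grep kube | grep -v pause    # any crashed control-plane containers
    # docker logs CONTAINERID                   # CONTAINERID taken from the line above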
	W0223 14:11:29.810856   18216 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-234000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-234000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 22:09:32.002014    1159 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 22:09:34.797029    1159 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 22:09:34.797829    1159 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0223 14:11:29.810888   18216 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 14:11:30.233889   18216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:11:30.243645   18216 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 14:11:30.243706   18216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 14:11:30.251214   18216 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 14:11:30.251256   18216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 14:11:30.298834   18216 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0223 14:11:30.298886   18216 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 14:11:30.459544   18216 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 14:11:30.459643   18216 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 14:11:30.459732   18216 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 14:11:30.611028   18216 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 14:11:30.611566   18216 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 14:11:30.611628   18216 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 14:11:30.681966   18216 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 14:11:30.724116   18216 out.go:204]   - Generating certificates and keys ...
	I0223 14:11:30.724230   18216 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 14:11:30.724303   18216 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 14:11:30.724372   18216 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 14:11:30.724434   18216 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 14:11:30.724506   18216 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 14:11:30.724558   18216 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 14:11:30.724635   18216 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 14:11:30.724692   18216 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 14:11:30.724745   18216 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 14:11:30.724816   18216 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 14:11:30.724846   18216 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 14:11:30.724923   18216 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 14:11:30.843315   18216 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 14:11:30.941283   18216 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 14:11:31.141792   18216 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 14:11:31.304279   18216 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 14:11:31.304990   18216 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 14:11:31.326699   18216 out.go:204]   - Booting up control plane ...
	I0223 14:11:31.326888   18216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 14:11:31.327020   18216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 14:11:31.327146   18216 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 14:11:31.327298   18216 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 14:11:31.327573   18216 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 14:12:11.313386   18216 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 14:12:11.314071   18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:12:11.314351   18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:12:16.314649   18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:12:16.314814   18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:12:26.316845   18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:12:26.317100   18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:12:46.317291   18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:12:46.317512   18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:13:26.319085   18216 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:13:26.319333   18216 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:13:26.319345   18216 kubeadm.go:322] 
	I0223 14:13:26.319432   18216 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0223 14:13:26.319492   18216 kubeadm.go:322] 		timed out waiting for the condition
	I0223 14:13:26.319506   18216 kubeadm.go:322] 
	I0223 14:13:26.319553   18216 kubeadm.go:322] 	This error is likely caused by:
	I0223 14:13:26.319617   18216 kubeadm.go:322] 		- The kubelet is not running
	I0223 14:13:26.319811   18216 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 14:13:26.319825   18216 kubeadm.go:322] 
	I0223 14:13:26.319939   18216 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 14:13:26.319983   18216 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0223 14:13:26.320029   18216 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0223 14:13:26.320043   18216 kubeadm.go:322] 
	I0223 14:13:26.320193   18216 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 14:13:26.320293   18216 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0223 14:13:26.320308   18216 kubeadm.go:322] 
	I0223 14:13:26.320436   18216 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0223 14:13:26.320492   18216 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0223 14:13:26.320561   18216 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0223 14:13:26.320606   18216 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0223 14:13:26.320616   18216 kubeadm.go:322] 
	I0223 14:13:26.323451   18216 kubeadm.go:322] W0223 22:11:30.298075    3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0223 14:13:26.323591   18216 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 14:13:26.323676   18216 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 14:13:26.323785   18216 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
	I0223 14:13:26.323875   18216 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 14:13:26.323973   18216 kubeadm.go:322] W0223 22:11:31.309096    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 14:13:26.324087   18216 kubeadm.go:322] W0223 22:11:31.309794    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 14:13:26.324165   18216 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 14:13:26.324233   18216 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 14:13:26.324245   18216 kubeadm.go:403] StartCluster complete in 3m54.411659639s
	I0223 14:13:26.324334   18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 14:13:26.342685   18216 logs.go:277] 0 containers: []
	W0223 14:13:26.342698   18216 logs.go:279] No container was found matching "kube-apiserver"
	I0223 14:13:26.342776   18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 14:13:26.361812   18216 logs.go:277] 0 containers: []
	W0223 14:13:26.361825   18216 logs.go:279] No container was found matching "etcd"
	I0223 14:13:26.361898   18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 14:13:26.380843   18216 logs.go:277] 0 containers: []
	W0223 14:13:26.380855   18216 logs.go:279] No container was found matching "coredns"
	I0223 14:13:26.380920   18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 14:13:26.399464   18216 logs.go:277] 0 containers: []
	W0223 14:13:26.399481   18216 logs.go:279] No container was found matching "kube-scheduler"
	I0223 14:13:26.399546   18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 14:13:26.419477   18216 logs.go:277] 0 containers: []
	W0223 14:13:26.419490   18216 logs.go:279] No container was found matching "kube-proxy"
	I0223 14:13:26.419564   18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 14:13:26.438713   18216 logs.go:277] 0 containers: []
	W0223 14:13:26.438728   18216 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 14:13:26.438808   18216 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 14:13:26.457712   18216 logs.go:277] 0 containers: []
	W0223 14:13:26.457727   18216 logs.go:279] No container was found matching "kindnet"
	I0223 14:13:26.457734   18216 logs.go:123] Gathering logs for kubelet ...
	I0223 14:13:26.457742   18216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 14:13:26.495670   18216 logs.go:123] Gathering logs for dmesg ...
	I0223 14:13:26.495684   18216 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 14:13:26.508002   18216 logs.go:123] Gathering logs for describe nodes ...
	I0223 14:13:26.508018   18216 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 14:13:26.560771   18216 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 14:13:26.560782   18216 logs.go:123] Gathering logs for Docker ...
	I0223 14:13:26.560789   18216 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 14:13:26.585074   18216 logs.go:123] Gathering logs for container status ...
	I0223 14:13:26.585089   18216 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 14:13:28.632743   18216 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047660062s)
	W0223 14:13:28.632872   18216 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 22:11:30.298075    3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 22:11:31.309096    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 22:11:31.309794    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 14:13:28.632892   18216 out.go:239] * 
	* 
	W0223 14:13:28.633015   18216 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 22:11:30.298075    3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 22:11:31.309096    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 22:11:31.309794    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 22:11:30.298075    3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 22:11:31.309096    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 22:11:31.309794    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 14:13:28.633031   18216 out.go:239] * 
	* 
	W0223 14:13:28.633678   18216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 14:13:28.696517   18216 out.go:177] 
	W0223 14:13:28.760593   18216 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 22:11:30.298075    3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 22:11:31.309096    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 22:11:31.309794    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 22:11:30.298075    3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 22:11:31.309096    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 22:11:31.309794    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 14:13:28.760728   18216 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 14:13:28.760831   18216 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 14:13:28.802524   18216 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-234000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (268.88s)
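
The failure above is kubeadm's wait-control-plane timeout: the kubelet never answered on http://localhost:10248/healthz, so no control-plane containers were ever created (all the docker ps filters returned 0 containers). A minimal sketch of the follow-up the log itself suggests, assuming the same profile name and driver as in this run; the ssh diagnostics mirror the kubeadm hints and the retry flag comes from the minikube suggestion printed above, and a clean delete-and-retry is only an assumed workflow, not part of this test:

	# inspect the kubelet on the node (commands suggested by kubeadm in the output above)
	out/minikube-darwin-amd64 -p ingress-addon-legacy-234000 ssh "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 -p ingress-addon-legacy-234000 ssh "sudo journalctl -xeu kubelet"
	# assumed clean retry with the cgroup-driver override that minikube recommends above
	out/minikube-darwin-amd64 delete -p ingress-addon-legacy-234000
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-234000 --kubernetes-version=v1.18.20 --memory=4096 --driver=docker --extra-config=kubelet.cgroup-driver=systemd
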

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (115.64s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-234000 addons enable ingress --alsologtostderr -v=5
E0223 14:14:49.870317   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-234000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m55.184105762s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 14:13:28.970239   18599 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:13:28.970733   18599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:13:28.970739   18599 out.go:309] Setting ErrFile to fd 2...
	I0223 14:13:28.970743   18599 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:13:28.970857   18599 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:13:28.992330   18599 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0223 14:13:29.013416   18599 config.go:182] Loaded profile config "ingress-addon-legacy-234000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 14:13:29.013437   18599 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-234000"
	I0223 14:13:29.013448   18599 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-234000"
	I0223 14:13:29.013739   18599 host.go:66] Checking if "ingress-addon-legacy-234000" exists ...
	I0223 14:13:29.014259   18599 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-234000 --format={{.State.Status}}
	I0223 14:13:29.095604   18599 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0223 14:13:29.116730   18599 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0223 14:13:29.138220   18599 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0223 14:13:29.159331   18599 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0223 14:13:29.180451   18599 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0223 14:13:29.180475   18599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0223 14:13:29.180563   18599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:13:29.237456   18599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58153 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa Username:docker}
	I0223 14:13:29.336269   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:13:29.388016   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:29.388056   18599 retry.go:31] will retry after 343.817586ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:29.732049   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:13:29.783628   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:29.783646   18599 retry.go:31] will retry after 368.154649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:30.154079   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:13:30.208646   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:30.208671   18599 retry.go:31] will retry after 705.388109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:30.914364   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:13:30.966992   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:30.967008   18599 retry.go:31] will retry after 695.373541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:31.663587   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:13:31.716666   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:31.716681   18599 retry.go:31] will retry after 904.916098ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:32.621891   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:13:32.673363   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:32.673384   18599 retry.go:31] will retry after 1.133845983s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:33.808146   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:13:33.862446   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:33.862462   18599 retry.go:31] will retry after 1.734202984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:35.597736   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:13:35.652348   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:35.652364   18599 retry.go:31] will retry after 3.365033484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:39.018086   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:13:39.070682   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:39.070698   18599 retry.go:31] will retry after 6.791160647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:45.864064   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:13:45.917315   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:45.917330   18599 retry.go:31] will retry after 7.829610828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:53.747182   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:13:53.799500   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:13:53.799521   18599 retry.go:31] will retry after 17.657176019s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:14:11.458921   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:14:11.513338   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:14:11.513356   18599 retry.go:31] will retry after 19.941641878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:14:31.457221   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:14:31.511964   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:14:31.511979   18599 retry.go:31] will retry after 27.683792887s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:14:59.197911   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:14:59.250769   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:14:59.250785   18599 retry.go:31] will retry after 24.675437964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:23.926560   18599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 14:15:23.979939   18599 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:23.991695   18599 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-234000"
	I0223 14:15:24.013375   18599 out.go:177] * Verifying ingress addon...
	I0223 14:15:24.036757   18599 out.go:177] 
	W0223 14:15:24.058600   18599 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-234000" does not exist: client config: context "ingress-addon-legacy-234000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-234000" does not exist: client config: context "ingress-addon-legacy-234000" does not exist]
	W0223 14:15:24.058630   18599 out.go:239] * 
	* 
	W0223 14:15:24.062805   18599 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 14:15:24.084261   18599 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-234000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-234000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54",
	        "Created": "2023-02-23T22:09:26.217717235Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 47925,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:09:26.502566465Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54/hostname",
	        "HostsPath": "/var/lib/docker/containers/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54/hosts",
	        "LogPath": "/var/lib/docker/containers/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54-json.log",
	        "Name": "/ingress-addon-legacy-234000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-234000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-234000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f5cef28e94d98f89b24833703dd2fc8d01ab8f09883e333d3ac111acd296566f-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f5cef28e94d98f89b24833703dd2fc8d01ab8f09883e333d3ac111acd296566f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f5cef28e94d98f89b24833703dd2fc8d01ab8f09883e333d3ac111acd296566f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f5cef28e94d98f89b24833703dd2fc8d01ab8f09883e333d3ac111acd296566f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-234000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-234000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-234000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-234000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-234000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "98b61559ee3c5022c0c5642480e0b51a579a8b1785f3c7998594a53007432b20",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58153"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58154"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58155"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58156"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58157"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/98b61559ee3c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-234000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c665daa3e0a9",
	                        "ingress-addon-legacy-234000"
	                    ],
	                    "NetworkID": "0a7c91d12514b8171a935fe39e1c2a6f15c46c964737dfc6db3e7a13a2293909",
	                    "EndpointID": "93ecbb9f65fece8dd26b94ba1b1812ecefdb9b1e13990569557fe109679835ed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-234000 -n ingress-addon-legacy-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-234000 -n ingress-addon-legacy-234000: exit status 6 (392.980803ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 14:15:24.551217   18728 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-234000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-234000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (115.64s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (103.39s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-234000 addons enable ingress-dns --alsologtostderr -v=5
E0223 14:16:40.075009   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:17:06.016199   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-234000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m42.944161441s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 14:15:24.607428   18740 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:15:24.608123   18740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:15:24.608129   18740 out.go:309] Setting ErrFile to fd 2...
	I0223 14:15:24.608133   18740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:15:24.608242   18740 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:15:24.630532   18740 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0223 14:15:24.652833   18740 config.go:182] Loaded profile config "ingress-addon-legacy-234000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 14:15:24.652871   18740 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-234000"
	I0223 14:15:24.652921   18740 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-234000"
	I0223 14:15:24.653436   18740 host.go:66] Checking if "ingress-addon-legacy-234000" exists ...
	I0223 14:15:24.654382   18740 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-234000 --format={{.State.Status}}
	I0223 14:15:24.734052   18740 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0223 14:15:24.756125   18740 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0223 14:15:24.777572   18740 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0223 14:15:24.777602   18740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0223 14:15:24.777723   18740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-234000
	I0223 14:15:24.834255   18740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58153 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/ingress-addon-legacy-234000/id_rsa Username:docker}
	I0223 14:15:24.934086   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:15:24.985090   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:24.985133   18740 retry.go:31] will retry after 280.50162ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:25.265901   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:15:25.317647   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:25.317663   18740 retry.go:31] will retry after 257.3964ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:25.577296   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:15:25.634011   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:25.634031   18740 retry.go:31] will retry after 351.302714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:25.985468   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:15:26.037042   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:26.037058   18740 retry.go:31] will retry after 931.753564ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:26.969735   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:15:27.024517   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:27.024538   18740 retry.go:31] will retry after 1.003829167s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:28.029280   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:15:28.081547   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:28.081562   18740 retry.go:31] will retry after 2.059909544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:30.142765   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:15:30.197364   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:30.197380   18740 retry.go:31] will retry after 3.071046273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:33.269357   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:15:33.323877   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:33.323895   18740 retry.go:31] will retry after 6.248917455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:39.573286   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:15:39.625470   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:39.629813   18740 retry.go:31] will retry after 4.522580304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:44.153329   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:15:44.206191   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:44.206207   18740 retry.go:31] will retry after 5.775626431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:49.981940   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:15:50.033924   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:15:50.033939   18740 retry.go:31] will retry after 12.968137139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:16:03.003582   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:16:03.057595   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:16:03.057610   18740 retry.go:31] will retry after 19.338061213s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:16:22.397222   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:16:22.449842   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:16:22.449858   18740 retry.go:31] will retry after 44.907485342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:17:07.357808   18740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 14:17:07.410408   18740 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 14:17:07.432447   18740 out.go:177] 
	W0223 14:17:07.454262   18740 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0223 14:17:07.454287   18740 out.go:239] * 
	* 
	W0223 14:17:07.459971   18740 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 14:17:07.481132   18740 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-234000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-234000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54",
	        "Created": "2023-02-23T22:09:26.217717235Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 47925,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:09:26.502566465Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54/hostname",
	        "HostsPath": "/var/lib/docker/containers/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54/hosts",
	        "LogPath": "/var/lib/docker/containers/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54-json.log",
	        "Name": "/ingress-addon-legacy-234000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-234000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-234000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f5cef28e94d98f89b24833703dd2fc8d01ab8f09883e333d3ac111acd296566f-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f5cef28e94d98f89b24833703dd2fc8d01ab8f09883e333d3ac111acd296566f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f5cef28e94d98f89b24833703dd2fc8d01ab8f09883e333d3ac111acd296566f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f5cef28e94d98f89b24833703dd2fc8d01ab8f09883e333d3ac111acd296566f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-234000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-234000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-234000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-234000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-234000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "98b61559ee3c5022c0c5642480e0b51a579a8b1785f3c7998594a53007432b20",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58153"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58154"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58155"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58156"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58157"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/98b61559ee3c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-234000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c665daa3e0a9",
	                        "ingress-addon-legacy-234000"
	                    ],
	                    "NetworkID": "0a7c91d12514b8171a935fe39e1c2a6f15c46c964737dfc6db3e7a13a2293909",
	                    "EndpointID": "93ecbb9f65fece8dd26b94ba1b1812ecefdb9b1e13990569557fe109679835ed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-234000 -n ingress-addon-legacy-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-234000 -n ingress-addon-legacy-234000: exit status 6 (390.769514ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 14:17:07.944981   18847 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-234000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-234000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (103.39s)
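The status error above is the warning acting on itself: the "ingress-addon-legacy-234000" entry is missing from the host kubeconfig, so `minikube status` cannot extract an API endpoint even though the container is running. A minimal shell sketch of the remediation the warning itself names, assuming the profile name from the log and the default kubeconfig location (illustrative only, not part of the test run):

	# Illustrative only: re-sync the kubeconfig entry for the profile named in the log.
	PROFILE=ingress-addon-legacy-234000
	kubectl config get-contexts              # confirm the profile's context is absent or stale
	minikube update-context -p "$PROFILE"    # rewrite the context, as the warning recommends
	kubectl --context "$PROFILE" get nodes   # verify the refreshed endpoint answers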

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:171: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-234000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-234000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54",
	        "Created": "2023-02-23T22:09:26.217717235Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 47925,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:09:26.502566465Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54/hostname",
	        "HostsPath": "/var/lib/docker/containers/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54/hosts",
	        "LogPath": "/var/lib/docker/containers/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54/c665daa3e0a991c5e3f94932a2ed8293664973bcd734ed661b2f686a0d60ba54-json.log",
	        "Name": "/ingress-addon-legacy-234000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-234000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-234000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f5cef28e94d98f89b24833703dd2fc8d01ab8f09883e333d3ac111acd296566f-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f5cef28e94d98f89b24833703dd2fc8d01ab8f09883e333d3ac111acd296566f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f5cef28e94d98f89b24833703dd2fc8d01ab8f09883e333d3ac111acd296566f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f5cef28e94d98f89b24833703dd2fc8d01ab8f09883e333d3ac111acd296566f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-234000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-234000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-234000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-234000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-234000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "98b61559ee3c5022c0c5642480e0b51a579a8b1785f3c7998594a53007432b20",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58153"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58154"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58155"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58156"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58157"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/98b61559ee3c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-234000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c665daa3e0a9",
	                        "ingress-addon-legacy-234000"
	                    ],
	                    "NetworkID": "0a7c91d12514b8171a935fe39e1c2a6f15c46c964737dfc6db3e7a13a2293909",
	                    "EndpointID": "93ecbb9f65fece8dd26b94ba1b1812ecefdb9b1e13990569557fe109679835ed",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-234000 -n ingress-addon-legacy-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-234000 -n ingress-addon-legacy-234000: exit status 6 (391.86939ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 14:17:08.395138   18859 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-234000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-234000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)
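Here the addon validation aborts immediately because no Kubernetes client could be built from the host kubeconfig, the same missing-entry condition reported above. A hedged sketch of how one might query the cluster without relying on the host kubeconfig at all, using minikube's bundled kubectl the way the test harness itself invokes it (illustrative only; profile name taken from the log):

	# Illustrative only: talk to the profile's API server via minikube's own kubectl wrapper.
	minikube profile list                                          # confirm the profile exists and its reported state
	minikube kubectl -p ingress-addon-legacy-234000 -- get nodes   # bypasses the stale host kubeconfig entirely
	minikube addons list -p ingress-addon-legacy-234000            # check whether the ingress addon registered as enabled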

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (9.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-359000 -- rollout status deployment/busybox: (3.763727845s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:496: expected 2 Pod IPs but got 1, output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:503: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-9zw45 -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-9zw45 -- nslookup kubernetes.io: exit status 1 (158.736449ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:513: Pod busybox-6b86dd6d48-9zw45 could not resolve 'kubernetes.io': exit status 1
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-ghfsb -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-9zw45 -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-9zw45 -- nslookup kubernetes.default: exit status 1 (151.616838ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:523: Pod busybox-6b86dd6d48-9zw45 could not resolve 'kubernetes.default': exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-ghfsb -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-9zw45 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-9zw45 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (158.257951ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:531: Pod busybox-6b86dd6d48-9zw45 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-ghfsb -- nslookup kubernetes.default.svc.cluster.local
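The pattern above is consistent: the busybox replica ...-ghfsb resolves cluster DNS while its peer ...-9zw45 does not, and the earlier jsonpath query returned only one pod IP instead of two. A hedged diagnostic sketch of generic kubectl usage one might run against such a cluster to see where each replica landed and whether the DNS pods are healthy (illustrative only; assumes minikube created a kubectl context named after the profile, as it does by default):

	# Illustrative only: locate both replicas and the cluster DNS pods.
	kubectl --context multinode-359000 get pods -o wide                                  # node and pod IP per busybox replica
	kubectl --context multinode-359000 -n kube-system get pods -l k8s-app=kube-dns -o wide   # CoreDNS placement and readiness
	kubectl --context multinode-359000 exec deploy/busybox -- cat /etc/resolv.conf      # confirm the pod resolver points at 10.96.0.10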
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-359000
helpers_test.go:235: (dbg) docker inspect multinode-359000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "35fabfa71c4d986b310a8326ab076114e2f237bb41fec2615956993c06fbf7d4",
	        "Created": "2023-02-23T22:22:25.825690898Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 92023,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:22:26.110802915Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/35fabfa71c4d986b310a8326ab076114e2f237bb41fec2615956993c06fbf7d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/35fabfa71c4d986b310a8326ab076114e2f237bb41fec2615956993c06fbf7d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/35fabfa71c4d986b310a8326ab076114e2f237bb41fec2615956993c06fbf7d4/hosts",
	        "LogPath": "/var/lib/docker/containers/35fabfa71c4d986b310a8326ab076114e2f237bb41fec2615956993c06fbf7d4/35fabfa71c4d986b310a8326ab076114e2f237bb41fec2615956993c06fbf7d4-json.log",
	        "Name": "/multinode-359000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-359000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-359000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b37c88dee6e8f718050d9c3a882d1a738a21392f3c214bc2ae682d08a8c774bd-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b37c88dee6e8f718050d9c3a882d1a738a21392f3c214bc2ae682d08a8c774bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b37c88dee6e8f718050d9c3a882d1a738a21392f3c214bc2ae682d08a8c774bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b37c88dee6e8f718050d9c3a882d1a738a21392f3c214bc2ae682d08a8c774bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-359000",
	                "Source": "/var/lib/docker/volumes/multinode-359000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-359000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-359000",
	                "name.minikube.sigs.k8s.io": "multinode-359000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a41ba8123e07116bee7f51c22243e8946c5457cdfd3d10fa3a4cddc3a333965",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58730"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58731"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58732"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58733"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58734"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5a41ba8123e0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-359000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "35fabfa71c4d",
	                        "multinode-359000"
	                    ],
	                    "NetworkID": "eb5aa03044a362392a7a3116bd1898165c0320685f48ef9fd4102df2baf38b21",
	                    "EndpointID": "229a2d32b60fa9d4b1b657244e8662288ca0eb664054d671cb85c2a7d04688ad",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-359000 -n multinode-359000
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-359000 logs -n 25: (2.460708719s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p second-015000                                  | second-015000        | jenkins | v1.29.0 | 23 Feb 23 14:21 PST | 23 Feb 23 14:21 PST |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| delete  | -p second-015000                                  | second-015000        | jenkins | v1.29.0 | 23 Feb 23 14:21 PST | 23 Feb 23 14:21 PST |
	| delete  | -p first-013000                                   | first-013000         | jenkins | v1.29.0 | 23 Feb 23 14:21 PST | 23 Feb 23 14:21 PST |
	| start   | -p mount-start-1-990000                           | mount-start-1-990000 | jenkins | v1.29.0 | 23 Feb 23 14:21 PST | 23 Feb 23 14:21 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-1-990000 ssh -- ls                    | mount-start-1-990000 | jenkins | v1.29.0 | 23 Feb 23 14:21 PST | 23 Feb 23 14:21 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| start   | -p mount-start-2-004000                           | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:21 PST | 23 Feb 23 14:22 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-2-004000 ssh -- ls                    | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-990000                           | mount-start-1-990000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-004000 ssh -- ls                    | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-004000                           | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	| start   | -p mount-start-2-004000                           | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	| ssh     | mount-start-2-004000 ssh -- ls                    | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-004000                           | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	| delete  | -p mount-start-1-990000                           | mount-start-1-990000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	| start   | -p multinode-359000                               | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:23 PST |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- apply -f                   | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- rollout                    | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- get pods -o                | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- get pods -o                | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST |                     |
	|         | busybox-6b86dd6d48-9zw45 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | busybox-6b86dd6d48-ghfsb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST |                     |
	|         | busybox-6b86dd6d48-9zw45 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | busybox-6b86dd6d48-ghfsb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST |                     |
	|         | busybox-6b86dd6d48-9zw45 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | busybox-6b86dd6d48-ghfsb -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 14:22:17
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 14:22:17.997568   20778 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:22:17.997723   20778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:22:17.997728   20778 out.go:309] Setting ErrFile to fd 2...
	I0223 14:22:17.997732   20778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:22:17.997857   20778 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:22:17.999185   20778 out.go:303] Setting JSON to false
	I0223 14:22:18.017507   20778 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6713,"bootTime":1677184225,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 14:22:18.017590   20778 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 14:22:18.039653   20778 out.go:177] * [multinode-359000] minikube v1.29.0 on Darwin 13.2
	I0223 14:22:18.083897   20778 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 14:22:18.083893   20778 notify.go:220] Checking for updates...
	I0223 14:22:18.105726   20778 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:22:18.127870   20778 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 14:22:18.149849   20778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 14:22:18.171667   20778 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	I0223 14:22:18.192824   20778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 14:22:18.215094   20778 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 14:22:18.276848   20778 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 14:22:18.276960   20778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:22:18.420106   20778 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 22:22:18.326028873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:22:18.462050   20778 out.go:177] * Using the docker driver based on user configuration
	I0223 14:22:18.483031   20778 start.go:296] selected driver: docker
	I0223 14:22:18.483049   20778 start.go:857] validating driver "docker" against <nil>
	I0223 14:22:18.483059   20778 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 14:22:18.485614   20778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:22:18.627078   20778 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 22:22:18.53469381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:22:18.627216   20778 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 14:22:18.627401   20778 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 14:22:18.648439   20778 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 14:22:18.669475   20778 cni.go:84] Creating CNI manager for ""
	I0223 14:22:18.669503   20778 cni.go:136] 0 nodes found, recommending kindnet
	I0223 14:22:18.669514   20778 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0223 14:22:18.669536   20778 start_flags.go:319] config:
	{Name:multinode-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:22:18.713150   20778 out.go:177] * Starting control plane node multinode-359000 in cluster multinode-359000
	I0223 14:22:18.734528   20778 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 14:22:18.756488   20778 out.go:177] * Pulling base image ...
	I0223 14:22:18.799601   20778 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 14:22:18.799662   20778 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 14:22:18.799701   20778 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 14:22:18.799721   20778 cache.go:57] Caching tarball of preloaded images
	I0223 14:22:18.799932   20778 preload.go:174] Found /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 14:22:18.799951   20778 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 14:22:18.802483   20778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/config.json ...
	I0223 14:22:18.802534   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/config.json: {Name:mk48cc9f4da0284d12aeeaf021c24cd89028c83b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:18.855090   20778 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 14:22:18.855109   20778 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 14:22:18.855128   20778 cache.go:193] Successfully downloaded all kic artifacts
	I0223 14:22:18.855168   20778 start.go:364] acquiring machines lock for multinode-359000: {Name:mk4618dcf142341b2bdb2e619b88566b84020269 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 14:22:18.855322   20778 start.go:368] acquired machines lock for "multinode-359000" in 141.911µs
	I0223 14:22:18.855365   20778 start.go:93] Provisioning new machine with config: &{Name:multinode-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 14:22:18.855415   20778 start.go:125] createHost starting for "" (driver="docker")
	I0223 14:22:18.878367   20778 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 14:22:18.878806   20778 start.go:159] libmachine.API.Create for "multinode-359000" (driver="docker")
	I0223 14:22:18.878852   20778 client.go:168] LocalClient.Create starting
	I0223 14:22:18.879060   20778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem
	I0223 14:22:18.879150   20778 main.go:141] libmachine: Decoding PEM data...
	I0223 14:22:18.879185   20778 main.go:141] libmachine: Parsing certificate...
	I0223 14:22:18.879303   20778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem
	I0223 14:22:18.879369   20778 main.go:141] libmachine: Decoding PEM data...
	I0223 14:22:18.879388   20778 main.go:141] libmachine: Parsing certificate...
	I0223 14:22:18.880263   20778 cli_runner.go:164] Run: docker network inspect multinode-359000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 14:22:18.935792   20778 cli_runner.go:211] docker network inspect multinode-359000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 14:22:18.935899   20778 network_create.go:281] running [docker network inspect multinode-359000] to gather additional debugging logs...
	I0223 14:22:18.935917   20778 cli_runner.go:164] Run: docker network inspect multinode-359000
	W0223 14:22:18.989285   20778 cli_runner.go:211] docker network inspect multinode-359000 returned with exit code 1
	I0223 14:22:18.989313   20778 network_create.go:284] error running [docker network inspect multinode-359000]: docker network inspect multinode-359000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-359000
	I0223 14:22:18.989328   20778 network_create.go:286] output of [docker network inspect multinode-359000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-359000
	
	** /stderr **
	I0223 14:22:18.989400   20778 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 14:22:19.045130   20778 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 14:22:19.045485   20778 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0006ef790}
	I0223 14:22:19.045498   20778 network_create.go:123] attempt to create docker network multinode-359000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 14:22:19.045572   20778 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-359000 multinode-359000
	I0223 14:22:19.132062   20778 network_create.go:107] docker network multinode-359000 192.168.58.0/24 created
	I0223 14:22:19.132091   20778 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-359000" container
	I0223 14:22:19.132246   20778 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 14:22:19.186179   20778 cli_runner.go:164] Run: docker volume create multinode-359000 --label name.minikube.sigs.k8s.io=multinode-359000 --label created_by.minikube.sigs.k8s.io=true
	I0223 14:22:19.240886   20778 oci.go:103] Successfully created a docker volume multinode-359000
	I0223 14:22:19.241033   20778 cli_runner.go:164] Run: docker run --rm --name multinode-359000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-359000 --entrypoint /usr/bin/test -v multinode-359000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 14:22:19.665975   20778 oci.go:107] Successfully prepared a docker volume multinode-359000
	I0223 14:22:19.666026   20778 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 14:22:19.666040   20778 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 14:22:19.666151   20778 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-359000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 14:22:25.631787   20778 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-359000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (5.965534313s)
	I0223 14:22:25.631808   20778 kic.go:199] duration metric: took 5.965735 seconds to extract preloaded images to volume
	I0223 14:22:25.631932   20778 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 14:22:25.772664   20778 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-359000 --name multinode-359000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-359000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-359000 --network multinode-359000 --ip 192.168.58.2 --volume multinode-359000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 14:22:26.119144   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Running}}
	I0223 14:22:26.178445   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:22:26.240245   20778 cli_runner.go:164] Run: docker exec multinode-359000 stat /var/lib/dpkg/alternatives/iptables
	I0223 14:22:26.348772   20778 oci.go:144] the created container "multinode-359000" has a running status.
	I0223 14:22:26.348801   20778 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa...
	I0223 14:22:26.567287   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 14:22:26.567362   20778 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 14:22:26.670716   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:22:26.727049   20778 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 14:22:26.727069   20778 kic_runner.go:114] Args: [docker exec --privileged multinode-359000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 14:22:26.831090   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:22:26.886450   20778 machine.go:88] provisioning docker machine ...
	I0223 14:22:26.886505   20778 ubuntu.go:169] provisioning hostname "multinode-359000"
	I0223 14:22:26.886606   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:26.972174   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:22:26.972577   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58730 <nil> <nil>}
	I0223 14:22:26.972595   20778 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-359000 && echo "multinode-359000" | sudo tee /etc/hostname
	I0223 14:22:27.114547   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-359000
	
	I0223 14:22:27.114642   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:27.170811   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:22:27.171170   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58730 <nil> <nil>}
	I0223 14:22:27.171184   20778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-359000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-359000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-359000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 14:22:27.305059   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 14:22:27.305087   20778 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
	I0223 14:22:27.305103   20778 ubuntu.go:177] setting up certificates
	I0223 14:22:27.305108   20778 provision.go:83] configureAuth start
	I0223 14:22:27.305183   20778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000
	I0223 14:22:27.361348   20778 provision.go:138] copyHostCerts
	I0223 14:22:27.361394   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:22:27.361454   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
	I0223 14:22:27.361463   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:22:27.361583   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
	I0223 14:22:27.361754   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:22:27.361785   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
	I0223 14:22:27.361790   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:22:27.361854   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
	I0223 14:22:27.361973   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:22:27.362008   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
	I0223 14:22:27.362012   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:22:27.362074   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
	I0223 14:22:27.362198   20778 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.multinode-359000 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-359000]
	I0223 14:22:27.440484   20778 provision.go:172] copyRemoteCerts
	I0223 14:22:27.440541   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 14:22:27.440600   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:27.497303   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:27.590994   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 14:22:27.591094   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 14:22:27.607997   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 14:22:27.608078   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0223 14:22:27.624915   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 14:22:27.624995   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 14:22:27.641981   20778 provision.go:86] duration metric: configureAuth took 336.859361ms
	I0223 14:22:27.641996   20778 ubuntu.go:193] setting minikube options for container-runtime
	I0223 14:22:27.642158   20778 config.go:182] Loaded profile config "multinode-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:22:27.642228   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:27.698110   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:22:27.698461   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58730 <nil> <nil>}
	I0223 14:22:27.698477   20778 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 14:22:27.831738   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 14:22:27.831759   20778 ubuntu.go:71] root file system type: overlay
	I0223 14:22:27.831846   20778 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 14:22:27.831933   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:27.889076   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:22:27.889472   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58730 <nil> <nil>}
	I0223 14:22:27.889521   20778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 14:22:28.033987   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 14:22:28.034099   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:28.090960   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:22:28.091310   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58730 <nil> <nil>}
	I0223 14:22:28.091323   20778 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 14:22:28.692040   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:22:28.032036957 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 14:22:28.692065   20778 machine.go:91] provisioned docker machine in 1.805585984s
	I0223 14:22:28.692071   20778 client.go:171] LocalClient.Create took 9.813156767s
	I0223 14:22:28.692087   20778 start.go:167] duration metric: libmachine.API.Create for "multinode-359000" took 9.813228955s
	I0223 14:22:28.692096   20778 start.go:300] post-start starting for "multinode-359000" (driver="docker")
	I0223 14:22:28.692101   20778 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 14:22:28.692176   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 14:22:28.692231   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:28.750799   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:28.847120   20778 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 14:22:28.850824   20778 command_runner.go:130] > NAME="Ubuntu"
	I0223 14:22:28.850833   20778 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 14:22:28.850837   20778 command_runner.go:130] > ID=ubuntu
	I0223 14:22:28.850853   20778 command_runner.go:130] > ID_LIKE=debian
	I0223 14:22:28.850864   20778 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 14:22:28.850868   20778 command_runner.go:130] > VERSION_ID="20.04"
	I0223 14:22:28.850872   20778 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 14:22:28.850877   20778 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 14:22:28.850881   20778 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 14:22:28.850894   20778 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 14:22:28.850898   20778 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 14:22:28.850902   20778 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 14:22:28.850946   20778 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 14:22:28.850960   20778 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 14:22:28.850966   20778 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 14:22:28.850971   20778 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 14:22:28.850981   20778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
	I0223 14:22:28.851081   20778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
	I0223 14:22:28.851269   20778 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
	I0223 14:22:28.851276   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> /etc/ssl/certs/152102.pem
	I0223 14:22:28.851475   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 14:22:28.858643   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:22:28.875607   20778 start.go:303] post-start completed in 183.501003ms
	I0223 14:22:28.876132   20778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000
	I0223 14:22:28.933182   20778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/config.json ...
	I0223 14:22:28.933592   20778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 14:22:28.933655   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:28.989909   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:29.081987   20778 command_runner.go:130] > 9%!
	(MISSING)I0223 14:22:29.082063   20778 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 14:22:29.086343   20778 command_runner.go:130] > 51G
	I0223 14:22:29.086706   20778 start.go:128] duration metric: createHost completed in 10.231226324s
	I0223 14:22:29.086721   20778 start.go:83] releasing machines lock for "multinode-359000", held for 10.231334204s
	I0223 14:22:29.086800   20778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000
	I0223 14:22:29.143690   20778 ssh_runner.go:195] Run: cat /version.json
	I0223 14:22:29.143707   20778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 14:22:29.143766   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:29.143782   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:29.202449   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:29.203618   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:29.345068   20778 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 14:22:29.346336   20778 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0223 14:22:29.346461   20778 ssh_runner.go:195] Run: systemctl --version
	I0223 14:22:29.350887   20778 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0223 14:22:29.350907   20778 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0223 14:22:29.351192   20778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 14:22:29.356299   20778 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 14:22:29.356312   20778 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 14:22:29.356321   20778 command_runner.go:130] > Device: a6h/166d	Inode: 269040      Links: 1
	I0223 14:22:29.356330   20778 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 14:22:29.356338   20778 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 14:22:29.356342   20778 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 14:22:29.356346   20778 command_runner.go:130] > Change: 2023-02-23 21:59:23.933961994 +0000
	I0223 14:22:29.356350   20778 command_runner.go:130] >  Birth: -
	I0223 14:22:29.356645   20778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 14:22:29.376246   20778 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 14:22:29.376339   20778 ssh_runner.go:195] Run: which cri-dockerd
	I0223 14:22:29.379965   20778 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 14:22:29.380147   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 14:22:29.387517   20778 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 14:22:29.399953   20778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 14:22:29.414326   20778 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 14:22:29.414354   20778 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 14:22:29.414368   20778 start.go:485] detecting cgroup driver to use...
	I0223 14:22:29.414380   20778 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:22:29.414464   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:22:29.426735   20778 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 14:22:29.426747   20778 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 14:22:29.427597   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 14:22:29.436078   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 14:22:29.444330   20778 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 14:22:29.444382   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 14:22:29.452683   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:22:29.460836   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 14:22:29.469038   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:22:29.477245   20778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 14:22:29.485018   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 14:22:29.493417   20778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 14:22:29.499949   20778 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 14:22:29.500616   20778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 14:22:29.507472   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:22:29.570144   20778 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 14:22:29.645730   20778 start.go:485] detecting cgroup driver to use...
	I0223 14:22:29.645748   20778 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:22:29.645806   20778 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 14:22:29.655468   20778 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 14:22:29.655662   20778 command_runner.go:130] > [Unit]
	I0223 14:22:29.655677   20778 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 14:22:29.655685   20778 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 14:22:29.655689   20778 command_runner.go:130] > BindsTo=containerd.service
	I0223 14:22:29.655695   20778 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 14:22:29.655700   20778 command_runner.go:130] > Wants=network-online.target
	I0223 14:22:29.655706   20778 command_runner.go:130] > Requires=docker.socket
	I0223 14:22:29.655710   20778 command_runner.go:130] > StartLimitBurst=3
	I0223 14:22:29.655714   20778 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 14:22:29.655717   20778 command_runner.go:130] > [Service]
	I0223 14:22:29.655720   20778 command_runner.go:130] > Type=notify
	I0223 14:22:29.655724   20778 command_runner.go:130] > Restart=on-failure
	I0223 14:22:29.655730   20778 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 14:22:29.655739   20778 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 14:22:29.655744   20778 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 14:22:29.655749   20778 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 14:22:29.655756   20778 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 14:22:29.655763   20778 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 14:22:29.655769   20778 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 14:22:29.655779   20778 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 14:22:29.655787   20778 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 14:22:29.655793   20778 command_runner.go:130] > ExecStart=
	I0223 14:22:29.655810   20778 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 14:22:29.655815   20778 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 14:22:29.655820   20778 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 14:22:29.655825   20778 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 14:22:29.655828   20778 command_runner.go:130] > LimitNOFILE=infinity
	I0223 14:22:29.655832   20778 command_runner.go:130] > LimitNPROC=infinity
	I0223 14:22:29.655835   20778 command_runner.go:130] > LimitCORE=infinity
	I0223 14:22:29.655839   20778 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 14:22:29.655844   20778 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 14:22:29.655848   20778 command_runner.go:130] > TasksMax=infinity
	I0223 14:22:29.655851   20778 command_runner.go:130] > TimeoutStartSec=0
	I0223 14:22:29.655856   20778 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 14:22:29.655860   20778 command_runner.go:130] > Delegate=yes
	I0223 14:22:29.655866   20778 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 14:22:29.655869   20778 command_runner.go:130] > KillMode=process
	I0223 14:22:29.655876   20778 command_runner.go:130] > [Install]
	I0223 14:22:29.655881   20778 command_runner.go:130] > WantedBy=multi-user.target
	I0223 14:22:29.656318   20778 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 14:22:29.656376   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 14:22:29.666346   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:22:29.679412   20778 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 14:22:29.679436   20778 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 14:22:29.680190   20778 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 14:22:29.785998   20778 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 14:22:29.846079   20778 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 14:22:29.846099   20778 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 14:22:29.875291   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:22:29.936773   20778 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 14:22:30.178429   20778 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 14:22:30.245350   20778 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 14:22:30.245421   20778 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 14:22:30.311955   20778 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 14:22:30.376482   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:22:30.442762   20778 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 14:22:30.453624   20778 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 14:22:30.453706   20778 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 14:22:30.457497   20778 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 14:22:30.457507   20778 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 14:22:30.457512   20778 command_runner.go:130] > Device: aeh/174d	Inode: 206         Links: 1
	I0223 14:22:30.457518   20778 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 14:22:30.457525   20778 command_runner.go:130] > Access: 2023-02-23 22:22:30.450036934 +0000
	I0223 14:22:30.457529   20778 command_runner.go:130] > Modify: 2023-02-23 22:22:30.450036934 +0000
	I0223 14:22:30.457534   20778 command_runner.go:130] > Change: 2023-02-23 22:22:30.451036933 +0000
	I0223 14:22:30.457543   20778 command_runner.go:130] >  Birth: -
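The "Will wait 60s for socket path" step above amounts to polling stat on /var/run/cri-dockerd.sock until the socket appears. A hedged Go sketch of that wait; the 60s budget comes from the log line, while the poll interval is an assumption.

```go
// Sketch: wait up to 60s for the cri-dockerd socket to exist, as the log
// above does before moving on to crictl.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/cri-dockerd.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(sock); err == nil {
			fmt.Println(sock, "is present")
			return
		}
		time.Sleep(250 * time.Millisecond) // poll interval is an assumption
	}
	fmt.Println("timed out waiting for", sock)
}
```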
	I0223 14:22:30.457558   20778 start.go:553] Will wait 60s for crictl version
	I0223 14:22:30.457593   20778 ssh_runner.go:195] Run: which crictl
	I0223 14:22:30.461212   20778 command_runner.go:130] > /usr/bin/crictl
	I0223 14:22:30.461380   20778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 14:22:30.552540   20778 command_runner.go:130] > Version:  0.1.0
	I0223 14:22:30.552553   20778 command_runner.go:130] > RuntimeName:  docker
	I0223 14:22:30.552557   20778 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 14:22:30.552564   20778 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 14:22:30.554412   20778 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 14:22:30.554492   20778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:22:30.578365   20778 command_runner.go:130] > 23.0.1
	I0223 14:22:30.579998   20778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:22:30.602894   20778 command_runner.go:130] > 23.0.1
	I0223 14:22:30.651359   20778 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 14:22:30.651602   20778 cli_runner.go:164] Run: docker exec -t multinode-359000 dig +short host.docker.internal
	I0223 14:22:30.770066   20778 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 14:22:30.770179   20778 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 14:22:30.774553   20778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
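The bash one-liner above makes the host.minikube.internal entry idempotent: strip any existing line ending in that name, append the fresh IP mapping, and copy the result back over /etc/hosts. A hedged Go sketch of the same filter-and-append idea; the IP and hostname are taken from the log, and unlike the bash version this sketch rewrites the file directly rather than via a temp file and `sudo cp`.

```go
// Sketch of the /etc/hosts update pattern above. Run as root.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.65.2\thost.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	kept := []string{}
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop the stale mapping, if any
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
```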
	I0223 14:22:30.784573   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:30.841215   20778 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 14:22:30.841297   20778 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:22:30.861049   20778 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 14:22:30.861071   20778 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 14:22:30.861075   20778 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 14:22:30.861081   20778 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 14:22:30.861086   20778 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 14:22:30.861091   20778 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 14:22:30.861095   20778 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 14:22:30.861102   20778 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 14:22:30.861138   20778 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 14:22:30.861150   20778 docker.go:560] Images already preloaded, skipping extraction
	I0223 14:22:30.861250   20778 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:22:30.880076   20778 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 14:22:30.880088   20778 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 14:22:30.880093   20778 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 14:22:30.880098   20778 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 14:22:30.880104   20778 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 14:22:30.880109   20778 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 14:22:30.880114   20778 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 14:22:30.880121   20778 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 14:22:30.881672   20778 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 14:22:30.881684   20778 cache_images.go:84] Images are preloaded, skipping loading
	I0223 14:22:30.881780   20778 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 14:22:30.905465   20778 command_runner.go:130] > cgroupfs
	I0223 14:22:30.907014   20778 cni.go:84] Creating CNI manager for ""
	I0223 14:22:30.907027   20778 cni.go:136] 1 nodes found, recommending kindnet
	I0223 14:22:30.907042   20778 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 14:22:30.907057   20778 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-359000 NodeName:multinode-359000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 14:22:30.907180   20778 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-359000"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 14:22:30.907244   20778 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-359000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 14:22:30.907308   20778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 14:22:30.914358   20778 command_runner.go:130] > kubeadm
	I0223 14:22:30.914366   20778 command_runner.go:130] > kubectl
	I0223 14:22:30.914370   20778 command_runner.go:130] > kubelet
	I0223 14:22:30.914948   20778 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 14:22:30.915008   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 14:22:30.922349   20778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0223 14:22:30.934893   20778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 14:22:30.947559   20778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0223 14:22:30.960493   20778 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 14:22:30.964355   20778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:22:30.974018   20778 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000 for IP: 192.168.58.2
	I0223 14:22:30.974034   20778 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:30.974214   20778 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
	I0223 14:22:30.974298   20778 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
	I0223 14:22:30.974352   20778 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key
	I0223 14:22:30.974367   20778 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt with IP's: []
	I0223 14:22:31.058307   20778 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt ...
	I0223 14:22:31.058316   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt: {Name:mka52b9e77c478dfe5439016c20d5225efaad9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:31.058594   20778 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key ...
	I0223 14:22:31.058601   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key: {Name:mkf0e7dd49748712552fa7819d7d2db125545e50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:31.058782   20778 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key.cee25041
	I0223 14:22:31.058797   20778 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 14:22:31.127584   20778 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt.cee25041 ...
	I0223 14:22:31.127591   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt.cee25041: {Name:mk5f961080e03220b9f67a4e8170b55a83081e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:31.127923   20778 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key.cee25041 ...
	I0223 14:22:31.127934   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key.cee25041: {Name:mkce59497be2b7607371982625aeaaad62aa9126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:31.128139   20778 certs.go:333] copying /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt
	I0223 14:22:31.128295   20778 certs.go:337] copying /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key
	I0223 14:22:31.128459   20778 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.key
	I0223 14:22:31.128476   20778 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.crt with IP's: []
	I0223 14:22:31.244647   20778 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.crt ...
	I0223 14:22:31.244657   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.crt: {Name:mk32899bae51507ea9dcc625c110d92663d55316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:31.244911   20778 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.key ...
	I0223 14:22:31.244919   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.key: {Name:mk5cdc2c98d324e290734ba0dd697285f9a4e252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:31.245116   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 14:22:31.245148   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 14:22:31.245170   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 14:22:31.245194   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 14:22:31.245215   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 14:22:31.245236   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 14:22:31.245255   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 14:22:31.245276   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 14:22:31.245374   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
	W0223 14:22:31.245426   20778 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
	I0223 14:22:31.245439   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 14:22:31.245477   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
	I0223 14:22:31.245511   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
	I0223 14:22:31.245542   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
	I0223 14:22:31.245618   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:22:31.245650   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem -> /usr/share/ca-certificates/15210.pem
	I0223 14:22:31.245680   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> /usr/share/ca-certificates/152102.pem
	I0223 14:22:31.245703   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:22:31.246268   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 14:22:31.264226   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 14:22:31.281261   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 14:22:31.298018   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 14:22:31.315069   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 14:22:31.332043   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 14:22:31.348982   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 14:22:31.366571   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 14:22:31.383857   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
	I0223 14:22:31.400819   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
	I0223 14:22:31.417731   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 14:22:31.434612   20778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 14:22:31.447186   20778 ssh_runner.go:195] Run: openssl version
	I0223 14:22:31.452279   20778 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 14:22:31.452666   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 14:22:31.460822   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:22:31.464726   20778 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:22:31.464859   20778 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:22:31.464901   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:22:31.469974   20778 command_runner.go:130] > b5213941
	I0223 14:22:31.470306   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 14:22:31.478394   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
	I0223 14:22:31.486497   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
	I0223 14:22:31.490349   20778 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:22:31.490446   20778 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:22:31.490494   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
	I0223 14:22:31.495755   20778 command_runner.go:130] > 51391683
	I0223 14:22:31.495989   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
	I0223 14:22:31.504022   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
	I0223 14:22:31.511980   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
	I0223 14:22:31.515839   20778 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:22:31.515984   20778 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:22:31.516030   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
	I0223 14:22:31.521095   20778 command_runner.go:130] > 3ec20f2e
	I0223 14:22:31.521439   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
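The three blocks above install each CA (minikubeCA.pem, 15210.pem, 152102.pem) into the system trust store by computing its OpenSSL subject hash and symlinking it as /etc/ssl/certs/<hash>.0, which is where OpenSSL-based clients look certificates up. A Go sketch of that hash-and-symlink step, shelling out to openssl exactly as the log does; the cert paths are the ones shown above.

```go
// Sketch of the CA trust-install pattern above. Run as root.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale symlink
	return os.Symlink(pem, link)
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/15210.pem",
		"/usr/share/ca-certificates/152102.pem",
	} {
		if err := installCA(pem); err != nil {
			log.Fatal(err)
		}
	}
}
```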
	I0223 14:22:31.529353   20778 kubeadm.go:401] StartCluster: {Name:multinode-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:22:31.529455   20778 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 14:22:31.548402   20778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 14:22:31.556334   20778 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0223 14:22:31.556346   20778 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0223 14:22:31.556351   20778 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0223 14:22:31.556412   20778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 14:22:31.563836   20778 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 14:22:31.563891   20778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 14:22:31.571109   20778 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0223 14:22:31.571121   20778 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0223 14:22:31.571127   20778 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0223 14:22:31.571150   20778 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 14:22:31.571175   20778 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 14:22:31.571195   20778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 14:22:31.622387   20778 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0223 14:22:31.622401   20778 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0223 14:22:31.622432   20778 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 14:22:31.622437   20778 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 14:22:31.726583   20778 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 14:22:31.726595   20778 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 14:22:31.726669   20778 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 14:22:31.726680   20778 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 14:22:31.726763   20778 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 14:22:31.726770   20778 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 14:22:31.853732   20778 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 14:22:31.853745   20778 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 14:22:31.875623   20778 out.go:204]   - Generating certificates and keys ...
	I0223 14:22:31.875691   20778 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0223 14:22:31.875718   20778 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 14:22:31.875784   20778 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0223 14:22:31.875796   20778 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 14:22:31.918241   20778 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 14:22:31.918250   20778 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 14:22:32.160845   20778 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 14:22:32.160859   20778 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0223 14:22:32.470893   20778 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 14:22:32.470918   20778 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0223 14:22:32.540261   20778 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 14:22:32.540269   20778 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0223 14:22:32.773035   20778 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 14:22:32.773050   20778 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0223 14:22:32.773252   20778 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-359000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 14:22:32.773263   20778 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-359000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 14:22:32.999464   20778 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 14:22:32.999473   20778 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0223 14:22:33.020525   20778 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-359000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 14:22:33.020536   20778 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-359000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 14:22:33.110920   20778 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 14:22:33.110926   20778 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 14:22:33.222781   20778 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 14:22:33.222791   20778 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 14:22:33.369263   20778 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 14:22:33.369275   20778 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0223 14:22:33.369317   20778 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 14:22:33.369328   20778 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 14:22:33.503536   20778 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 14:22:33.503551   20778 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 14:22:33.596312   20778 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 14:22:33.596328   20778 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 14:22:33.813896   20778 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 14:22:33.813908   20778 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 14:22:33.967258   20778 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 14:22:33.967271   20778 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 14:22:33.977422   20778 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 14:22:33.977439   20778 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 14:22:33.978095   20778 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 14:22:33.978101   20778 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 14:22:33.978133   20778 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 14:22:33.978139   20778 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 14:22:34.049610   20778 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 14:22:34.049621   20778 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 14:22:34.071313   20778 out.go:204]   - Booting up control plane ...
	I0223 14:22:34.071390   20778 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 14:22:34.071399   20778 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 14:22:34.071473   20778 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 14:22:34.071479   20778 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 14:22:34.071536   20778 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 14:22:34.071548   20778 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 14:22:34.071627   20778 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 14:22:34.071634   20778 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 14:22:34.071761   20778 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 14:22:34.071768   20778 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 14:22:42.058654   20778 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002264 seconds
	I0223 14:22:42.058678   20778 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002264 seconds
	I0223 14:22:42.058823   20778 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 14:22:42.058830   20778 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 14:22:42.066794   20778 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 14:22:42.066812   20778 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 14:22:42.583785   20778 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0223 14:22:42.583795   20778 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0223 14:22:42.583942   20778 kubeadm.go:322] [mark-control-plane] Marking the node multinode-359000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 14:22:42.583949   20778 command_runner.go:130] > [mark-control-plane] Marking the node multinode-359000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 14:22:43.092233   20778 kubeadm.go:322] [bootstrap-token] Using token: a3m378.esw3wxqjqraswiei
	I0223 14:22:43.092252   20778 command_runner.go:130] > [bootstrap-token] Using token: a3m378.esw3wxqjqraswiei
	I0223 14:22:43.129545   20778 out.go:204]   - Configuring RBAC rules ...
	I0223 14:22:43.129705   20778 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 14:22:43.129719   20778 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 14:22:43.131996   20778 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 14:22:43.132005   20778 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 14:22:43.136844   20778 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 14:22:43.136858   20778 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 14:22:43.138991   20778 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 14:22:43.139005   20778 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 14:22:43.141911   20778 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 14:22:43.141928   20778 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 14:22:43.144405   20778 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 14:22:43.144417   20778 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 14:22:43.152145   20778 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 14:22:43.152161   20778 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 14:22:43.283855   20778 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0223 14:22:43.283869   20778 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0223 14:22:43.570405   20778 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0223 14:22:43.570428   20778 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0223 14:22:43.570787   20778 kubeadm.go:322] 
	I0223 14:22:43.570836   20778 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0223 14:22:43.570845   20778 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0223 14:22:43.570854   20778 kubeadm.go:322] 
	I0223 14:22:43.570923   20778 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0223 14:22:43.570932   20778 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0223 14:22:43.570941   20778 kubeadm.go:322] 
	I0223 14:22:43.570964   20778 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0223 14:22:43.570973   20778 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0223 14:22:43.571016   20778 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 14:22:43.571022   20778 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 14:22:43.571068   20778 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 14:22:43.571077   20778 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 14:22:43.571082   20778 kubeadm.go:322] 
	I0223 14:22:43.571126   20778 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0223 14:22:43.571134   20778 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0223 14:22:43.571143   20778 kubeadm.go:322] 
	I0223 14:22:43.571191   20778 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 14:22:43.571197   20778 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 14:22:43.571201   20778 kubeadm.go:322] 
	I0223 14:22:43.571253   20778 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0223 14:22:43.571261   20778 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0223 14:22:43.571328   20778 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 14:22:43.571335   20778 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 14:22:43.571395   20778 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 14:22:43.571399   20778 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 14:22:43.571408   20778 kubeadm.go:322] 
	I0223 14:22:43.571484   20778 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0223 14:22:43.571490   20778 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0223 14:22:43.571552   20778 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0223 14:22:43.571558   20778 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0223 14:22:43.571567   20778 kubeadm.go:322] 
	I0223 14:22:43.571634   20778 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token a3m378.esw3wxqjqraswiei \
	I0223 14:22:43.571638   20778 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token a3m378.esw3wxqjqraswiei \
	I0223 14:22:43.571719   20778 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 \
	I0223 14:22:43.571721   20778 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 \
	I0223 14:22:43.571742   20778 command_runner.go:130] > 	--control-plane 
	I0223 14:22:43.571747   20778 kubeadm.go:322] 	--control-plane 
	I0223 14:22:43.571755   20778 kubeadm.go:322] 
	I0223 14:22:43.571823   20778 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0223 14:22:43.571824   20778 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0223 14:22:43.571833   20778 kubeadm.go:322] 
	I0223 14:22:43.571909   20778 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token a3m378.esw3wxqjqraswiei \
	I0223 14:22:43.571918   20778 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a3m378.esw3wxqjqraswiei \
	I0223 14:22:43.572005   20778 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 
	I0223 14:22:43.572012   20778 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 
	I0223 14:22:43.575110   20778 kubeadm.go:322] W0223 22:22:31.615362    1296 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 14:22:43.575115   20778 command_runner.go:130] ! W0223 22:22:31.615362    1296 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 14:22:43.575244   20778 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 14:22:43.575257   20778 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 14:22:43.575362   20778 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 14:22:43.575371   20778 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 14:22:43.575384   20778 cni.go:84] Creating CNI manager for ""
	I0223 14:22:43.575393   20778 cni.go:136] 1 nodes found, recommending kindnet
	I0223 14:22:43.636248   20778 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0223 14:22:43.658313   20778 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 14:22:43.664670   20778 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 14:22:43.664687   20778 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 14:22:43.664695   20778 command_runner.go:130] > Device: a6h/166d	Inode: 267127      Links: 1
	I0223 14:22:43.664703   20778 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 14:22:43.664715   20778 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 14:22:43.664723   20778 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 14:22:43.664729   20778 command_runner.go:130] > Change: 2023-02-23 21:59:23.284856714 +0000
	I0223 14:22:43.664734   20778 command_runner.go:130] >  Birth: -
	I0223 14:22:43.664784   20778 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 14:22:43.664795   20778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 14:22:43.678783   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 14:22:44.187994   20778 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0223 14:22:44.192238   20778 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0223 14:22:44.198612   20778 command_runner.go:130] > serviceaccount/kindnet created
	I0223 14:22:44.205598   20778 command_runner.go:130] > daemonset.apps/kindnet created
	I0223 14:22:44.211476   20778 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 14:22:44.211563   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:44.211565   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0 minikube.k8s.io/name=multinode-359000 minikube.k8s.io/updated_at=2023_02_23T14_22_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:44.219349   20778 command_runner.go:130] > -16
	I0223 14:22:44.219386   20778 ops.go:34] apiserver oom_adj: -16
	I0223 14:22:44.308996   20778 command_runner.go:130] > node/multinode-359000 labeled
	I0223 14:22:44.309042   20778 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0223 14:22:44.309151   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:44.388721   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:44.888914   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:44.951937   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:45.390933   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:45.450655   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:45.891037   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:45.956320   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:46.389803   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:46.452876   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:46.890085   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:46.954791   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:47.389812   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:47.453029   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:47.891091   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:47.951811   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:48.389152   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:48.453297   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:48.891103   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:48.956666   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:49.390010   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:49.454455   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:49.890013   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:49.954025   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:50.390149   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:50.454538   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:50.889982   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:50.950898   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:51.389860   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:51.450032   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:51.890910   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:51.955754   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:52.389061   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:52.481858   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:52.889693   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:52.950050   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:53.389031   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:53.452988   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:53.890471   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:53.952957   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:54.390475   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:54.452099   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:54.889587   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:55.008076   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:55.389115   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:55.455382   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:55.889025   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:55.947614   20778 command_runner.go:130] > NAME      SECRETS   AGE
	I0223 14:22:55.947633   20778 command_runner.go:130] > default   0         0s
	I0223 14:22:55.950686   20778 kubeadm.go:1073] duration metric: took 11.739123889s to wait for elevateKubeSystemPrivileges.
	I0223 14:22:55.950705   20778 kubeadm.go:403] StartCluster complete in 24.421221883s
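Note: the retry block above is minikube polling until kubeadm has created the "default" ServiceAccount before it elevates kube-system privileges. A minimal shell sketch of that wait, reusing the exact command from the log; the ~500ms interval is an assumption read off the timestamps:

    until sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # keep retrying while the server still answers 'serviceaccounts "default" not found'
    done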
	I0223 14:22:55.950722   20778 settings.go:142] acquiring lock: {Name:mk5254606ab776d081c4c857df8d4e00b86fee57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:55.950813   20778 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:22:55.951298   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/kubeconfig: {Name:mk366c13f6069774a57c4d74123d5172c8522a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:55.951575   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 14:22:55.951593   20778 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0223 14:22:55.951656   20778 addons.go:65] Setting storage-provisioner=true in profile "multinode-359000"
	I0223 14:22:55.951677   20778 addons.go:227] Setting addon storage-provisioner=true in "multinode-359000"
	I0223 14:22:55.951681   20778 addons.go:65] Setting default-storageclass=true in profile "multinode-359000"
	I0223 14:22:55.951709   20778 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-359000"
	I0223 14:22:55.951721   20778 host.go:66] Checking if "multinode-359000" exists ...
	I0223 14:22:55.951729   20778 config.go:182] Loaded profile config "multinode-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:22:55.951797   20778 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:22:55.951982   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:22:55.952053   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:22:55.952056   20778 kapi.go:59] client config for multinode-359000: &rest.Config{Host:"https://127.0.0.1:58734", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:22:55.956474   20778 cert_rotation.go:137] Starting client certificate rotation controller
	I0223 14:22:55.956759   20778 round_trippers.go:463] GET https://127.0.0.1:58734/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 14:22:55.956769   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:55.956777   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:55.956782   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:55.965593   20778 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0223 14:22:55.965610   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:55.965616   20778 round_trippers.go:580]     Audit-Id: 9755b856-a8b2-4aa2-922a-a5a3c26ffa99
	I0223 14:22:55.965621   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:55.965626   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:55.965630   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:55.965635   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:55.965640   20778 round_trippers.go:580]     Content-Length: 291
	I0223 14:22:55.965644   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:55 GMT
	I0223 14:22:55.965667   20778 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08757e71-1b54-44ae-9839-af03f5e9d0c0","resourceVersion":"324","creationTimestamp":"2023-02-23T22:22:43Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0223 14:22:55.966004   20778 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08757e71-1b54-44ae-9839-af03f5e9d0c0","resourceVersion":"324","creationTimestamp":"2023-02-23T22:22:43Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0223 14:22:55.966030   20778 round_trippers.go:463] PUT https://127.0.0.1:58734/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 14:22:55.966035   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:55.966041   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:55.966050   20778 round_trippers.go:473]     Content-Type: application/json
	I0223 14:22:55.966077   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:55.971574   20778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 14:22:55.971605   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:55.971615   20778 round_trippers.go:580]     Audit-Id: bb70b539-cee3-4d6c-bfb5-0bc20b00b073
	I0223 14:22:55.971623   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:55.971631   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:55.971639   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:55.971646   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:55.971655   20778 round_trippers.go:580]     Content-Length: 291
	I0223 14:22:55.971678   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:55 GMT
	I0223 14:22:55.971709   20778 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08757e71-1b54-44ae-9839-af03f5e9d0c0","resourceVersion":"337","creationTimestamp":"2023-02-23T22:22:43Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
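Note: the PUT above rewrites the coredns Scale subresource from 2 replicas down to 1, since only a single control-plane node exists at this point. A rough kubectl equivalent of the same change, assuming the kubeconfig written for this profile:

    kubectl --kubeconfig=/Users/jenkins/minikube-integration/15909-14738/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1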
	I0223 14:22:56.022076   20778 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:22:56.044690   20778 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 14:22:56.044967   20778 kapi.go:59] client config for multinode-359000: &rest.Config{Host:"https://127.0.0.1:58734", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:22:56.065956   20778 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 14:22:56.065973   20778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 14:22:56.066086   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:56.067183   20778 round_trippers.go:463] GET https://127.0.0.1:58734/apis/storage.k8s.io/v1/storageclasses
	I0223 14:22:56.067248   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:56.067271   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:56.067285   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:56.070562   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:22:56.070594   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:56.070605   20778 round_trippers.go:580]     Audit-Id: 703452e3-e644-4b28-a2e0-31732cff6011
	I0223 14:22:56.070616   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:56.070627   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:56.070636   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:56.070669   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:56.070676   20778 round_trippers.go:580]     Content-Length: 109
	I0223 14:22:56.070681   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:56 GMT
	I0223 14:22:56.070710   20778 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"350"},"items":[]}
	I0223 14:22:56.071158   20778 addons.go:227] Setting addon default-storageclass=true in "multinode-359000"
	I0223 14:22:56.071185   20778 host.go:66] Checking if "multinode-359000" exists ...
	I0223 14:22:56.071743   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:22:56.096240   20778 command_runner.go:130] > apiVersion: v1
	I0223 14:22:56.096268   20778 command_runner.go:130] > data:
	I0223 14:22:56.096275   20778 command_runner.go:130] >   Corefile: |
	I0223 14:22:56.096284   20778 command_runner.go:130] >     .:53 {
	I0223 14:22:56.096291   20778 command_runner.go:130] >         errors
	I0223 14:22:56.096301   20778 command_runner.go:130] >         health {
	I0223 14:22:56.096310   20778 command_runner.go:130] >            lameduck 5s
	I0223 14:22:56.096321   20778 command_runner.go:130] >         }
	I0223 14:22:56.096332   20778 command_runner.go:130] >         ready
	I0223 14:22:56.096347   20778 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0223 14:22:56.096356   20778 command_runner.go:130] >            pods insecure
	I0223 14:22:56.096365   20778 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0223 14:22:56.096377   20778 command_runner.go:130] >            ttl 30
	I0223 14:22:56.096387   20778 command_runner.go:130] >         }
	I0223 14:22:56.096398   20778 command_runner.go:130] >         prometheus :9153
	I0223 14:22:56.096408   20778 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0223 14:22:56.096424   20778 command_runner.go:130] >            max_concurrent 1000
	I0223 14:22:56.096434   20778 command_runner.go:130] >         }
	I0223 14:22:56.096440   20778 command_runner.go:130] >         cache 30
	I0223 14:22:56.096448   20778 command_runner.go:130] >         loop
	I0223 14:22:56.096456   20778 command_runner.go:130] >         reload
	I0223 14:22:56.096475   20778 command_runner.go:130] >         loadbalance
	I0223 14:22:56.096487   20778 command_runner.go:130] >     }
	I0223 14:22:56.096495   20778 command_runner.go:130] > kind: ConfigMap
	I0223 14:22:56.096505   20778 command_runner.go:130] > metadata:
	I0223 14:22:56.096513   20778 command_runner.go:130] >   creationTimestamp: "2023-02-23T22:22:43Z"
	I0223 14:22:56.096517   20778 command_runner.go:130] >   name: coredns
	I0223 14:22:56.096520   20778 command_runner.go:130] >   namespace: kube-system
	I0223 14:22:56.096524   20778 command_runner.go:130] >   resourceVersion: "227"
	I0223 14:22:56.096529   20778 command_runner.go:130] >   uid: 0dcdd836-fb8b-4019-a423-111674db63b0
	I0223 14:22:56.096677   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0223 14:22:56.136049   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:56.140946   20778 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 14:22:56.140957   20778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 14:22:56.141022   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:56.204480   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:56.384454   20778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 14:22:56.471941   20778 round_trippers.go:463] GET https://127.0.0.1:58734/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 14:22:56.471956   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:56.471963   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:56.471968   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:56.474643   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:56.474659   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:56.474665   20778 round_trippers.go:580]     Audit-Id: 6f6367ee-0c5d-4f5d-82ba-3c28cdde7d4b
	I0223 14:22:56.474670   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:56.474674   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:56.474681   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:56.474685   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:56.474690   20778 round_trippers.go:580]     Content-Length: 291
	I0223 14:22:56.474695   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:56 GMT
	I0223 14:22:56.474709   20778 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08757e71-1b54-44ae-9839-af03f5e9d0c0","resourceVersion":"357","creationTimestamp":"2023-02-23T22:22:43Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 14:22:56.474762   20778 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-359000" context rescaled to 1 replicas
	I0223 14:22:56.474784   20778 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 14:22:56.497137   20778 out.go:177] * Verifying Kubernetes components...
	I0223 14:22:56.491673   20778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 14:22:56.518936   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:22:56.565793   20778 command_runner.go:130] > configmap/coredns replaced
	I0223 14:22:56.573373   20778 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
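Note: the replace command at 14:22:56.096677 injects a hosts block (plus a log directive) into the Corefile dumped above. A sketch of the injected stanza as it ends up in the coredns ConfigMap, per the sed expression in the log:

    hosts {
       192.168.65.2 host.minikube.internal
       fallthrough
    }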
	I0223 14:22:56.804214   20778 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0223 14:22:56.869433   20778 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0223 14:22:56.879395   20778 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 14:22:56.886010   20778 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 14:22:56.894549   20778 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0223 14:22:56.904426   20778 command_runner.go:130] > pod/storage-provisioner created
	I0223 14:22:56.993382   20778 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0223 14:22:57.000838   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:57.064092   20778 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0223 14:22:57.088118   20778 addons.go:492] enable addons completed in 1.136446823s: enabled=[storage-provisioner default-storageclass]
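Note: only storage-provisioner and default-storageclass were requested in the toEnable map above; the other addons stay off. A hedged way to inspect or toggle the same addons on this profile via the minikube CLI, shown as generic commands rather than the exact test binary:

    minikube -p multinode-359000 addons list
    minikube -p multinode-359000 addons enable storage-provisioner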
	I0223 14:22:57.098379   20778 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:22:57.098617   20778 kapi.go:59] client config for multinode-359000: &rest.Config{Host:"https://127.0.0.1:58734", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:22:57.098921   20778 node_ready.go:35] waiting up to 6m0s for node "multinode-359000" to be "Ready" ...
	I0223 14:22:57.098971   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:57.098976   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:57.098984   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:57.098989   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:57.101688   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:57.101704   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:57.101710   20778 round_trippers.go:580]     Audit-Id: b1325e75-bfa9-4729-8bfe-0d3efdc69ea4
	I0223 14:22:57.101717   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:57.101724   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:57.101732   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:57.101740   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:57.101744   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:57 GMT
	I0223 14:22:57.101839   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:22:57.102385   20778 node_ready.go:49] node "multinode-359000" has status "Ready":"True"
	I0223 14:22:57.102397   20778 node_ready.go:38] duration metric: took 3.45935ms waiting for node "multinode-359000" to be "Ready" ...
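Note: node_ready repeatedly GETs /api/v1/nodes/multinode-359000 and checks the Ready condition; here it was already True on the first read. A kubectl sketch of an equivalent check with the same 6m budget (an illustration, not what the test itself runs):

    kubectl wait --for=condition=Ready node/multinode-359000 --timeout=6m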
	I0223 14:22:57.102408   20778 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 14:22:57.102462   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods
	I0223 14:22:57.102467   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:57.102474   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:57.102479   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:57.105284   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:57.105306   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:57.105318   20778 round_trippers.go:580]     Audit-Id: 76b96b0c-0813-4180-8a2c-e009bc0f8902
	I0223 14:22:57.105330   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:57.105336   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:57.105342   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:57.105349   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:57.105357   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:57 GMT
	I0223 14:22:57.107064   20778 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"373"},"items":[{"metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"366","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 60224 chars]
	I0223 14:22:57.109910   20778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace to be "Ready" ...
	I0223 14:22:57.109970   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:22:57.109980   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:57.109991   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:57.110006   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:57.112565   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:57.112579   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:57.112585   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:57.112590   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:57 GMT
	I0223 14:22:57.112595   20778 round_trippers.go:580]     Audit-Id: 72de2877-cc47-467a-aa0e-f88257433df4
	I0223 14:22:57.112600   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:57.112605   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:57.112613   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:57.112873   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"366","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 14:22:57.113185   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:57.113194   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:57.113203   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:57.113209   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:57.115459   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:57.115473   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:57.115482   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:57.115487   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:57.115492   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:57.115497   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:57 GMT
	I0223 14:22:57.115501   20778 round_trippers.go:580]     Audit-Id: a98c7b94-e7b7-4e44-9172-e4152cd5312a
	I0223 14:22:57.115508   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:57.115919   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:22:57.617462   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:22:57.617481   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:57.617489   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:57.617495   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:57.620270   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:57.620287   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:57.620296   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:57.620314   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:57.620326   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:57.620337   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:57 GMT
	I0223 14:22:57.620346   20778 round_trippers.go:580]     Audit-Id: 12df3f9f-e02b-486b-b807-d426af0f6a4f
	I0223 14:22:57.620354   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:57.620436   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"366","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 14:22:57.620739   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:57.620746   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:57.620754   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:57.620764   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:57.623093   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:57.623125   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:57.623139   20778 round_trippers.go:580]     Audit-Id: bb6b1185-383c-447d-b850-3ef227053c52
	I0223 14:22:57.623145   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:57.623150   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:57.623155   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:57.623160   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:57.623165   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:57 GMT
	I0223 14:22:57.623234   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:22:58.116605   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:22:58.116619   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:58.116628   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:58.116636   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:58.119319   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:58.119336   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:58.119345   20778 round_trippers.go:580]     Audit-Id: d3e03daa-7868-4d12-8bf5-1a3c43154faa
	I0223 14:22:58.119357   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:58.119371   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:58.119383   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:58.119398   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:58.119408   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:58 GMT
	I0223 14:22:58.119558   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:22:58.119999   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:58.120007   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:58.120013   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:58.120019   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:58.123459   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:22:58.123474   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:58.123481   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:58.123488   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:58.123496   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:58.123503   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:58.123510   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:58 GMT
	I0223 14:22:58.123532   20778 round_trippers.go:580]     Audit-Id: 37f4c48e-dc5c-4ceb-afa8-536d12234f91
	I0223 14:22:58.123633   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:22:58.616567   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:22:58.616580   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:58.616586   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:58.616592   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:58.619786   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:22:58.619798   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:58.619804   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:58.619809   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:58.619817   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:58.619823   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:58.619829   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:58 GMT
	I0223 14:22:58.619834   20778 round_trippers.go:580]     Audit-Id: 46b232fc-9c34-4662-bd80-04513f011a74
	I0223 14:22:58.619897   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:22:58.620187   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:58.620193   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:58.620199   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:58.620220   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:58.622357   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:58.622368   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:58.622375   20778 round_trippers.go:580]     Audit-Id: d40a1ea7-d5ee-4eae-8d42-f15c3e5abe59
	I0223 14:22:58.622380   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:58.622386   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:58.622391   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:58.622396   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:58.622402   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:58 GMT
	I0223 14:22:58.622466   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:22:59.117558   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:22:59.117583   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:59.117609   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:59.117616   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:59.166085   20778 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I0223 14:22:59.166116   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:59.166134   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:59.166149   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:59.166162   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:59 GMT
	I0223 14:22:59.166172   20778 round_trippers.go:580]     Audit-Id: d512c492-116f-4469-8d51-d958daabbc48
	I0223 14:22:59.166187   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:59.166223   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:59.166335   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:22:59.166767   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:59.166780   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:59.166792   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:59.166804   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:59.169614   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:59.169627   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:59.169632   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:59.169637   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:59 GMT
	I0223 14:22:59.169642   20778 round_trippers.go:580]     Audit-Id: dfc4ad9b-e930-4cb5-81c7-39fff481e2c0
	I0223 14:22:59.169647   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:59.169653   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:59.169670   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:59.169780   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:22:59.169977   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
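Note: pod_ready keeps re-reading the coredns pod and the node roughly every 500ms until the pod reports Ready=True. A kubectl sketch of the same wait using the k8s-app=kube-dns label and 6m timeout logged above:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m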
	I0223 14:22:59.617610   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:22:59.617636   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:59.617648   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:59.617657   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:59.622140   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:22:59.622154   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:59.622160   20778 round_trippers.go:580]     Audit-Id: 876e3638-bdb3-49ea-9b85-e6a8396cafb1
	I0223 14:22:59.622165   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:59.622173   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:59.622179   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:59.622184   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:59.622188   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:59 GMT
	I0223 14:22:59.622353   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:22:59.622656   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:59.622664   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:59.622670   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:59.622676   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:59.626069   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:22:59.626083   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:59.626089   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:59.626094   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:59.626099   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:59.626103   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:59 GMT
	I0223 14:22:59.626109   20778 round_trippers.go:580]     Audit-Id: b8645429-be5b-4965-aebc-0d74fa956510
	I0223 14:22:59.626116   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:59.626173   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:00.117460   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:00.117481   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:00.117493   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:00.117504   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:00.121949   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:00.121961   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:00.121966   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:00.121972   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:00 GMT
	I0223 14:23:00.121977   20778 round_trippers.go:580]     Audit-Id: f9de4328-090b-4692-840f-d31425d93d2f
	I0223 14:23:00.121982   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:00.121987   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:00.121991   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:00.122056   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:00.122350   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:00.122361   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:00.122370   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:00.122378   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:00.124459   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:00.124468   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:00.124474   20778 round_trippers.go:580]     Audit-Id: 043aea5d-b758-4e31-8026-6ac6e7581dc9
	I0223 14:23:00.124479   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:00.124484   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:00.124489   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:00.124494   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:00.124499   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:00 GMT
	I0223 14:23:00.124560   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:00.616773   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:00.616795   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:00.616808   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:00.616818   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:00.620827   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:00.620842   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:00.620853   20778 round_trippers.go:580]     Audit-Id: bfa5c5d3-f599-4232-8e24-bdb6fa7d6e12
	I0223 14:23:00.620860   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:00.620869   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:00.620880   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:00.620890   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:00.620897   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:00 GMT
	I0223 14:23:00.621287   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:00.621619   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:00.621626   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:00.621632   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:00.621637   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:00.624138   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:00.624148   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:00.624153   20778 round_trippers.go:580]     Audit-Id: bfa2a5fb-2ce3-4a30-a0b6-c839876a19a3
	I0223 14:23:00.624159   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:00.624164   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:00.624171   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:00.624177   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:00.624181   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:00 GMT
	I0223 14:23:00.624312   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:01.116501   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:01.116522   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:01.116534   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:01.116544   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:01.120076   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:01.120087   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:01.120093   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:01.120103   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:01.120109   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:01 GMT
	I0223 14:23:01.120114   20778 round_trippers.go:580]     Audit-Id: e973932f-7933-424c-b847-47909ecf17c8
	I0223 14:23:01.120119   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:01.120124   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:01.120204   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:01.120476   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:01.120482   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:01.120487   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:01.120493   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:01.122595   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:01.122605   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:01.122611   20778 round_trippers.go:580]     Audit-Id: 7fb05415-530f-4893-843e-84214d61a6ba
	I0223 14:23:01.122616   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:01.122621   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:01.122626   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:01.122633   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:01.122639   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:01 GMT
	I0223 14:23:01.122694   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:01.616518   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:01.616545   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:01.616558   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:01.616588   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:01.620565   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:01.620582   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:01.620590   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:01 GMT
	I0223 14:23:01.620597   20778 round_trippers.go:580]     Audit-Id: d1d6210b-433e-4a1a-bc9c-f7e88360446f
	I0223 14:23:01.620603   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:01.620611   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:01.620617   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:01.620630   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:01.620708   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:01.621014   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:01.621021   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:01.621027   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:01.621032   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:01.623254   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:01.623263   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:01.623269   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:01.623275   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:01.623281   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:01.623285   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:01 GMT
	I0223 14:23:01.623291   20778 round_trippers.go:580]     Audit-Id: 776e547a-b014-44cc-bcfa-75cfd6bcd88d
	I0223 14:23:01.623296   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:01.623348   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:01.623533   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:02.117176   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:02.117196   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:02.117208   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:02.117218   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:02.121190   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:02.121206   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:02.121214   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:02.121221   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:02.121228   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:02 GMT
	I0223 14:23:02.121235   20778 round_trippers.go:580]     Audit-Id: 93034275-ebc8-4d5b-9469-3775d80796e2
	I0223 14:23:02.121242   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:02.121248   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:02.121342   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:02.121639   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:02.121646   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:02.121654   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:02.121661   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:02.123801   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:02.123811   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:02.123817   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:02.123822   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:02 GMT
	I0223 14:23:02.123826   20778 round_trippers.go:580]     Audit-Id: 90f4eca5-6afc-4bc2-b987-2048a32e1711
	I0223 14:23:02.123831   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:02.123836   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:02.123840   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:02.124005   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:02.616338   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:02.616351   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:02.616358   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:02.616363   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:02.619284   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:02.619296   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:02.619303   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:02.619309   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:02.619314   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:02.619319   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:02 GMT
	I0223 14:23:02.619324   20778 round_trippers.go:580]     Audit-Id: ebadf671-d8a9-421e-a025-f38feaaa25f7
	I0223 14:23:02.619329   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:02.619423   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:02.619720   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:02.619727   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:02.619733   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:02.619741   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:02.621845   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:02.621856   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:02.621864   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:02 GMT
	I0223 14:23:02.621870   20778 round_trippers.go:580]     Audit-Id: 2579e22e-690e-4ebe-8b98-5ef9baab153a
	I0223 14:23:02.621880   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:02.621885   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:02.621890   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:02.621895   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:02.622181   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:03.116593   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:03.116607   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:03.116613   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:03.116619   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:03.119400   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:03.119412   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:03.119418   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:03.119422   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:03.119427   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:03.119432   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:03 GMT
	I0223 14:23:03.119438   20778 round_trippers.go:580]     Audit-Id: c40fb8e0-8a9c-4013-9441-26c3c5726c23
	I0223 14:23:03.119442   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:03.119504   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:03.119777   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:03.119783   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:03.119789   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:03.119795   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:03.121924   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:03.121934   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:03.121939   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:03 GMT
	I0223 14:23:03.121944   20778 round_trippers.go:580]     Audit-Id: 80e1b293-e280-4fa1-b7c1-54cffadaac26
	I0223 14:23:03.121949   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:03.121955   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:03.121960   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:03.121964   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:03.122160   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:03.616295   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:03.616308   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:03.616314   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:03.616319   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:03.665926   20778 round_trippers.go:574] Response Status: 200 OK in 49 milliseconds
	I0223 14:23:03.665951   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:03.665963   20778 round_trippers.go:580]     Audit-Id: b00b7eda-0b90-4d3f-be92-30cc95ba7a30
	I0223 14:23:03.665973   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:03.665982   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:03.665991   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:03.666001   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:03.666011   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:03 GMT
	I0223 14:23:03.667392   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:03.667769   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:03.667778   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:03.667787   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:03.667798   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:03.670516   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:03.670528   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:03.670534   20778 round_trippers.go:580]     Audit-Id: f4e3b551-002a-4c6e-90e2-f228d8556662
	I0223 14:23:03.670538   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:03.670544   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:03.670549   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:03.670554   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:03.670558   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:03 GMT
	I0223 14:23:03.670641   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:03.670866   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:04.116550   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:04.116563   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:04.116570   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:04.116575   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:04.119043   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:04.119055   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:04.119061   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:04.119068   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:04.119073   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:04.119077   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:04 GMT
	I0223 14:23:04.119082   20778 round_trippers.go:580]     Audit-Id: 280237db-7115-4695-8818-15e643797b3f
	I0223 14:23:04.119088   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:04.119238   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:04.119533   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:04.119540   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:04.119545   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:04.119563   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:04.121777   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:04.121787   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:04.121794   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:04 GMT
	I0223 14:23:04.121801   20778 round_trippers.go:580]     Audit-Id: 0ad994bf-b3dc-4182-90b5-14e5b0791ee2
	I0223 14:23:04.121806   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:04.121811   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:04.121819   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:04.121823   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:04.122078   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:04.616299   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:04.616315   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:04.616322   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:04.616329   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:04.619170   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:04.619183   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:04.619189   20778 round_trippers.go:580]     Audit-Id: da4c45c0-eac5-4acd-acf2-b2c5e1bea699
	I0223 14:23:04.619195   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:04.619199   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:04.619204   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:04.619211   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:04.619221   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:04 GMT
	I0223 14:23:04.619311   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:04.619600   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:04.619608   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:04.619616   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:04.619624   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:04.623247   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:04.623261   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:04.623267   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:04.623272   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:04 GMT
	I0223 14:23:04.623277   20778 round_trippers.go:580]     Audit-Id: d9899891-ab02-453d-bf77-0c07b49ed368
	I0223 14:23:04.623281   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:04.623286   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:04.623293   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:04.623356   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:05.116395   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:05.116409   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:05.116416   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:05.116421   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:05.119323   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:05.119335   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:05.119347   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:05.119353   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:05 GMT
	I0223 14:23:05.119358   20778 round_trippers.go:580]     Audit-Id: ea15df90-12c7-41a6-9819-1a7e0d661048
	I0223 14:23:05.119363   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:05.119368   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:05.119373   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:05.119433   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:05.119714   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:05.119721   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:05.119727   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:05.119732   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:05.123537   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:05.123548   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:05.123554   20778 round_trippers.go:580]     Audit-Id: 4c0d5024-4521-462b-8e57-0878174b3c58
	I0223 14:23:05.123559   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:05.123565   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:05.123571   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:05.123577   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:05.123582   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:05 GMT
	I0223 14:23:05.123640   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:05.617165   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:05.617178   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:05.617184   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:05.617190   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:05.667196   20778 round_trippers.go:574] Response Status: 200 OK in 49 milliseconds
	I0223 14:23:05.667220   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:05.667229   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:05 GMT
	I0223 14:23:05.667237   20778 round_trippers.go:580]     Audit-Id: e68ffa82-1ac6-437c-87d6-bd2d513155bd
	I0223 14:23:05.667244   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:05.667252   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:05.667259   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:05.667267   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:05.667815   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:05.668307   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:05.668316   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:05.668328   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:05.668336   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:05.670603   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:05.670634   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:05.670647   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:05.670658   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:05.670666   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:05.670673   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:05 GMT
	I0223 14:23:05.670683   20778 round_trippers.go:580]     Audit-Id: d1a4924d-0629-456b-ba4d-e98ee575d603
	I0223 14:23:05.670691   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:05.670927   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:05.671157   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
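(Editor's note) The pod_ready lines above show minikube's readiness wait loop: it repeatedly GETs the CoreDNS pod and the node, inspects the pod's Ready condition, and retries on a roughly 500 ms cadence, logging "Ready":"False" until the condition flips. As an illustrative sketch only — this is not minikube's actual pod_ready.go; the kubeconfig path is a placeholder, and the namespace/pod name are simply copied from this log — the same check could be written with client-go like this:

	// Sketch of a pod-readiness polling loop (assumption: illustrative only, not minikube code).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; minikube resolves this differently.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll about every 500 ms, matching the cadence visible in the log timestamps.
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(
				context.TODO(), "coredns-787d4945fb-4hj2n", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println("pod not Ready yet; retrying")
			time.Sleep(500 * time.Millisecond)
		}
	}

The log continues with the same GET pod / GET node pattern below.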
	I0223 14:23:06.116435   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:06.116446   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:06.116453   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:06.116458   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:06.119403   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:06.119418   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:06.119425   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:06.119433   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:06.119440   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:06 GMT
	I0223 14:23:06.119445   20778 round_trippers.go:580]     Audit-Id: d838565a-236a-43ad-bf66-b30a8f4cbcf9
	I0223 14:23:06.119450   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:06.119455   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:06.119529   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:06.119813   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:06.119819   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:06.119825   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:06.119830   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:06.122032   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:06.122045   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:06.122054   20778 round_trippers.go:580]     Audit-Id: 0425c4a8-a4fb-4aca-86be-2f23c92ebaee
	I0223 14:23:06.122061   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:06.122070   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:06.122077   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:06.122086   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:06.122094   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:06 GMT
	I0223 14:23:06.122193   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:06.616360   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:06.616373   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:06.616387   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:06.616393   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:06.619341   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:06.619365   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:06.619380   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:06.619392   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:06.619401   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:06.619409   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:06.619416   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:06 GMT
	I0223 14:23:06.619424   20778 round_trippers.go:580]     Audit-Id: 5c60af4a-693a-4113-88bf-75b766692b45
	I0223 14:23:06.619498   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:06.619810   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:06.619818   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:06.619827   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:06.619835   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:06.622133   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:06.622144   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:06.622150   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:06.622156   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:06.622161   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:06.622166   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:06 GMT
	I0223 14:23:06.622174   20778 round_trippers.go:580]     Audit-Id: 7ead92d5-2ff4-4831-8772-ec872a778c2b
	I0223 14:23:06.622179   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:06.622236   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:07.116429   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:07.116443   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:07.116450   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:07.116455   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:07.119244   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:07.119258   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:07.119266   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:07.119273   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:07 GMT
	I0223 14:23:07.119280   20778 round_trippers.go:580]     Audit-Id: 4a62c324-bd03-417c-9db5-267cee771840
	I0223 14:23:07.119287   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:07.119301   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:07.119306   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:07.119372   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:07.119697   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:07.119706   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:07.119713   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:07.119721   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:07.165551   20778 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0223 14:23:07.165629   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:07.165658   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:07.165673   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:07.165686   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:07.165701   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:07.165718   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:07 GMT
	I0223 14:23:07.165736   20778 round_trippers.go:580]     Audit-Id: f139e90a-75ff-4306-a60e-636a3ffc350a
	I0223 14:23:07.166302   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:07.616565   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:07.616579   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:07.616585   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:07.616590   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:07.619324   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:07.619337   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:07.619343   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:07.619347   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:07.619352   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:07 GMT
	I0223 14:23:07.619358   20778 round_trippers.go:580]     Audit-Id: 370427bd-2242-4d13-b1d3-e2b6aeac6a3a
	I0223 14:23:07.619367   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:07.619373   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:07.619438   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:07.619743   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:07.619750   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:07.619755   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:07.619761   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:07.622169   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:07.622179   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:07.622186   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:07.622191   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:07 GMT
	I0223 14:23:07.622196   20778 round_trippers.go:580]     Audit-Id: cbe275c9-1278-4b10-b457-37a543e7e7c2
	I0223 14:23:07.622201   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:07.622206   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:07.622211   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:07.622275   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:08.116354   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:08.116368   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:08.116375   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:08.116380   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:08.119183   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:08.119197   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:08.119203   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:08.119208   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:08.119232   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:08 GMT
	I0223 14:23:08.119242   20778 round_trippers.go:580]     Audit-Id: 66560798-fa67-40a1-a845-7c2d35d698b5
	I0223 14:23:08.119252   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:08.119265   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:08.119478   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:08.119797   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:08.119807   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:08.119813   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:08.119819   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:08.121743   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:08.121753   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:08.121758   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:08.121764   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:08.121769   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:08 GMT
	I0223 14:23:08.121774   20778 round_trippers.go:580]     Audit-Id: 095d3abe-2a48-459f-a792-60e1737ab6b3
	I0223 14:23:08.121779   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:08.121784   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:08.122004   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:08.122187   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:08.616462   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:08.616485   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:08.616500   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:08.616512   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:08.666460   20778 round_trippers.go:574] Response Status: 200 OK in 49 milliseconds
	I0223 14:23:08.666481   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:08.666490   20778 round_trippers.go:580]     Audit-Id: 27fe5585-7dc8-47ac-8e6c-3103cbb13ed7
	I0223 14:23:08.666498   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:08.666504   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:08.666511   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:08.666518   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:08.666525   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:08 GMT
	I0223 14:23:08.666610   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:08.667019   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:08.667028   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:08.667037   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:08.667044   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:08.669393   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:08.669404   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:08.669410   20778 round_trippers.go:580]     Audit-Id: 4b0f8274-dcfa-4308-86d6-0bb74d08d915
	I0223 14:23:08.669417   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:08.669424   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:08.669429   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:08.669435   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:08.669440   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:08 GMT
	I0223 14:23:08.669526   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:09.116392   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:09.116405   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:09.116412   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:09.116417   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:09.118927   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:09.118944   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:09.118954   20778 round_trippers.go:580]     Audit-Id: 70c3a6b8-db42-4684-b76b-126f90f1e712
	I0223 14:23:09.118962   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:09.118970   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:09.118978   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:09.118986   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:09.118995   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:09 GMT
	I0223 14:23:09.119081   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:09.119441   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:09.119451   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:09.119459   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:09.119468   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:09.121680   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:09.121692   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:09.121697   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:09.121702   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:09.121707   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:09.121712   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:09.121717   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:09 GMT
	I0223 14:23:09.121722   20778 round_trippers.go:580]     Audit-Id: e8a8d03c-6e15-468a-ad56-fe6c5117c791
	I0223 14:23:09.121789   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:09.616525   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:09.616542   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:09.616551   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:09.616557   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:09.619434   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:09.619448   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:09.619454   20778 round_trippers.go:580]     Audit-Id: 5edf01ec-ead8-43aa-9f70-0d35e6776027
	I0223 14:23:09.619459   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:09.619464   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:09.619469   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:09.619474   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:09.619479   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:09 GMT
	I0223 14:23:09.619541   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:09.619825   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:09.619832   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:09.619838   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:09.619843   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:09.622600   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:09.622612   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:09.622617   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:09 GMT
	I0223 14:23:09.622623   20778 round_trippers.go:580]     Audit-Id: f5b8c9d1-8582-43b6-b869-264e811e523c
	I0223 14:23:09.622628   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:09.622633   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:09.622638   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:09.622643   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:09.622709   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:10.116458   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:10.116473   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:10.116482   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:10.116487   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:10.119325   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:10.119341   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:10.119347   20778 round_trippers.go:580]     Audit-Id: d780daa1-7868-45ae-b119-5f3e7cf50343
	I0223 14:23:10.119354   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:10.119362   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:10.119369   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:10.119375   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:10.119379   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:10 GMT
	I0223 14:23:10.119452   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:10.119796   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:10.119804   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:10.119813   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:10.119824   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:10.165977   20778 round_trippers.go:574] Response Status: 200 OK in 46 milliseconds
	I0223 14:23:10.166000   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:10.166013   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:10.166025   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:10 GMT
	I0223 14:23:10.166037   20778 round_trippers.go:580]     Audit-Id: 2874327a-f156-4571-9f37-eb70f00579a0
	I0223 14:23:10.166049   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:10.166064   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:10.166072   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:10.166622   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:10.166909   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:10.616523   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:10.616544   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:10.616555   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:10.616564   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:10.619353   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:10.619371   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:10.619378   20778 round_trippers.go:580]     Audit-Id: 65dd7960-ac84-40a8-88f9-1d60afa0ba00
	I0223 14:23:10.619383   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:10.619388   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:10.619392   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:10.619402   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:10.619408   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:10 GMT
	I0223 14:23:10.619482   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:10.619773   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:10.619780   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:10.619786   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:10.619791   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:10.622062   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:10.622074   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:10.622080   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:10.622085   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:10.622093   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:10.622099   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:10.622104   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:10 GMT
	I0223 14:23:10.622116   20778 round_trippers.go:580]     Audit-Id: e4101d3a-621c-47da-be72-77083957d3a0
	I0223 14:23:10.622182   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:11.116630   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:11.116645   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:11.116651   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:11.116657   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:11.119186   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:11.119200   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:11.119206   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:11.119224   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:11 GMT
	I0223 14:23:11.119233   20778 round_trippers.go:580]     Audit-Id: 13abcccd-fb85-43fd-b87e-c37ad2f3438e
	I0223 14:23:11.119239   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:11.119244   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:11.119249   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:11.119316   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:11.119629   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:11.119636   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:11.119642   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:11.119647   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:11.122137   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:11.122147   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:11.122153   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:11.122158   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:11.122513   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:11.122622   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:11.122646   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:11 GMT
	I0223 14:23:11.122661   20778 round_trippers.go:580]     Audit-Id: fd677e5c-b452-4cd5-ae1c-4e68e5800f26
	I0223 14:23:11.122862   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:11.616364   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:11.616387   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:11.616440   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:11.616446   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:11.618929   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:11.618942   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:11.618949   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:11 GMT
	I0223 14:23:11.618955   20778 round_trippers.go:580]     Audit-Id: 0b2270e7-7676-4266-9714-b00b780bc78e
	I0223 14:23:11.618962   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:11.618967   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:11.618971   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:11.618976   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:11.619614   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:11.619896   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:11.619903   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:11.619908   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:11.619914   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:11.622533   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:11.622545   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:11.622551   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:11.622556   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:11.622562   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:11 GMT
	I0223 14:23:11.622567   20778 round_trippers.go:580]     Audit-Id: bf3875ef-669f-4f72-8240-3f6b0e99837c
	I0223 14:23:11.622572   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:11.622577   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:11.622631   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:12.116936   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:12.117020   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:12.117040   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:12.117053   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:12.120560   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:12.120576   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:12.120583   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:12.120591   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:12.120596   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:12 GMT
	I0223 14:23:12.120601   20778 round_trippers.go:580]     Audit-Id: 7f1d8a00-63f2-4857-bccb-0b357984111b
	I0223 14:23:12.120606   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:12.120611   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:12.120688   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"422","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6380 chars]
	I0223 14:23:12.120995   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:12.121002   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:12.121011   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:12.121017   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:12.123432   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:12.123442   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:12.123448   20778 round_trippers.go:580]     Audit-Id: e3d3655d-6172-4022-aace-d9e9f64dfcc2
	I0223 14:23:12.123453   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:12.123472   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:12.123480   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:12.123487   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:12.123492   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:12 GMT
	I0223 14:23:12.123599   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:12.617816   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:12.617844   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:12.617856   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:12.617959   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:12.622252   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:12.622267   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:12.622284   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:12.622293   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:12.622300   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:12.622307   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:12.622314   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:12 GMT
	I0223 14:23:12.622334   20778 round_trippers.go:580]     Audit-Id: 6752de30-b0aa-4111-85c2-b59c7163ef95
	I0223 14:23:12.622386   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"422","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6380 chars]
	I0223 14:23:12.622670   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:12.622677   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:12.622683   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:12.622688   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:12.624742   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:12.624751   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:12.624756   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:12 GMT
	I0223 14:23:12.624761   20778 round_trippers.go:580]     Audit-Id: fe7e2bf0-d664-4f37-be19-9723fb1889de
	I0223 14:23:12.624767   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:12.624773   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:12.624779   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:12.624784   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:12.624832   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:12.625000   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:13.117273   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:13.117287   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.117293   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.117298   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.120073   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:13.120085   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.120091   20778 round_trippers.go:580]     Audit-Id: cd7766ab-f79c-443f-aa25-e0d0837eb615
	I0223 14:23:13.120095   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.120100   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.120105   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.120110   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.120115   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.120173   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"426","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 14:23:13.120441   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.120450   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.120456   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.120462   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.123094   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:13.123102   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.123107   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.123112   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.123117   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.123122   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.123127   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.123132   20778 round_trippers.go:580]     Audit-Id: 92812056-cccc-4717-a569-e767beaa3385
	I0223 14:23:13.123186   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:13.123361   20778 pod_ready.go:92] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:13.123372   20778 pod_ready.go:81] duration metric: took 16.013355024s waiting for pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.123378   20778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-4rfn2" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.123409   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4rfn2
	I0223 14:23:13.123414   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.123419   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.123424   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.125347   20778 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0223 14:23:13.125357   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.125363   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.125368   20778 round_trippers.go:580]     Audit-Id: edb4ec90-4c11-4174-865c-82edf5962970
	I0223 14:23:13.125374   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.125381   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.125387   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.125391   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.125396   20778 round_trippers.go:580]     Content-Length: 216
	I0223 14:23:13.125407   20778 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-4rfn2\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-4rfn2","kind":"pods"},"code":404}
	I0223 14:23:13.125513   20778 pod_ready.go:97] error getting pod "coredns-787d4945fb-4rfn2" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-4rfn2" not found
	I0223 14:23:13.125521   20778 pod_ready.go:81] duration metric: took 2.137024ms waiting for pod "coredns-787d4945fb-4rfn2" in "kube-system" namespace to be "Ready" ...
	E0223 14:23:13.125526   20778 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-4rfn2" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-4rfn2" not found
	I0223 14:23:13.125531   20778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.125559   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/etcd-multinode-359000
	I0223 14:23:13.125564   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.125569   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.125575   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.127741   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:13.127750   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.127756   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.127761   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.127767   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.127771   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.127777   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.127782   20778 round_trippers.go:580]     Audit-Id: a6643dbc-1e4d-47c2-922e-c591fd2e9585
	I0223 14:23:13.127855   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-359000","namespace":"kube-system","uid":"398e38cc-24ea-4f91-8b62-51681eb997b4","resourceVersion":"295","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"93ed633257d1dccd5f056f259fe5ad92","kubernetes.io/config.mirror":"93ed633257d1dccd5f056f259fe5ad92","kubernetes.io/config.seen":"2023-02-23T22:22:43.384430470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 14:23:13.128073   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.128079   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.128085   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.128090   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.129969   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:13.129979   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.129984   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.129990   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.129996   20778 round_trippers.go:580]     Audit-Id: c252ef2b-0991-418d-b495-f380d2c313b6
	I0223 14:23:13.130001   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.130006   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.130011   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.130066   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:13.130244   20778 pod_ready.go:92] pod "etcd-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:13.130249   20778 pod_ready.go:81] duration metric: took 4.713845ms waiting for pod "etcd-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.130256   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.130284   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-359000
	I0223 14:23:13.130288   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.130296   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.130303   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.132448   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:13.132457   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.132462   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.132467   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.132472   20778 round_trippers.go:580]     Audit-Id: bfa3b467-1501-4dcb-acae-7c8e8a32468f
	I0223 14:23:13.132478   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.132482   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.132488   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.132550   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-359000","namespace":"kube-system","uid":"39b152d9-2735-457b-a3a1-5e7aca7dc8f3","resourceVersion":"264","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"cfb3605b4e0ab2e0442f07f281676240","kubernetes.io/config.mirror":"cfb3605b4e0ab2e0442f07f281676240","kubernetes.io/config.seen":"2023-02-23T22:22:43.384450086Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 14:23:13.132800   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.132805   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.132811   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.132816   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.134907   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:13.134916   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.134921   20778 round_trippers.go:580]     Audit-Id: 2c875bb2-25c3-4dc0-aef3-6268b7a58989
	I0223 14:23:13.134927   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.134933   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.134938   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.134943   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.134948   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.134994   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:13.135158   20778 pod_ready.go:92] pod "kube-apiserver-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:13.135163   20778 pod_ready.go:81] duration metric: took 4.903109ms waiting for pod "kube-apiserver-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.135168   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.135193   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-359000
	I0223 14:23:13.135198   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.135204   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.135209   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.137058   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:13.137067   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.137073   20778 round_trippers.go:580]     Audit-Id: e18cf633-434b-4ddc-9aa8-e86db08f416b
	I0223 14:23:13.137078   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.137084   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.137092   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.137097   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.137102   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.137170   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-359000","namespace":"kube-system","uid":"361170a2-c3b3-4be5-95ca-334b3b892a82","resourceVersion":"267","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2d2ed3414aeb862284d35d22f8aea7e3","kubernetes.io/config.mirror":"2d2ed3414aeb862284d35d22f8aea7e3","kubernetes.io/config.seen":"2023-02-23T22:22:43.384451227Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 14:23:13.137419   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.137425   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.137431   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.137436   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.139685   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:13.139696   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.139702   20778 round_trippers.go:580]     Audit-Id: cb420e48-6de6-4c76-bb5f-77332cebb38a
	I0223 14:23:13.139707   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.139713   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.139718   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.139723   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.139728   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.139788   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:13.139981   20778 pod_ready.go:92] pod "kube-controller-manager-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:13.139987   20778 pod_ready.go:81] duration metric: took 4.814281ms waiting for pod "kube-controller-manager-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.139992   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lkkx4" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.140022   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-lkkx4
	I0223 14:23:13.140027   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.140034   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.140041   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.141993   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:13.142002   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.142008   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.142013   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.142018   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.142024   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.142029   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.142035   20778 round_trippers.go:580]     Audit-Id: 101c32ef-b444-44e2-9126-50cdd0b847d5
	I0223 14:23:13.142252   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lkkx4","generateName":"kube-proxy-","namespace":"kube-system","uid":"42230635-8bb5-4f57-b543-5ddbeada143a","resourceVersion":"392","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 14:23:13.317781   20778 request.go:622] Waited for 175.206143ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.317834   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.317844   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.317855   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.317875   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.321915   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:13.321926   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.321931   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.321937   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.321942   20778 round_trippers.go:580]     Audit-Id: ebf93ff4-c742-4fe6-9169-0321f3e6713e
	I0223 14:23:13.321948   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.321952   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.321958   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.322013   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:13.322200   20778 pod_ready.go:92] pod "kube-proxy-lkkx4" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:13.322206   20778 pod_ready.go:81] duration metric: took 182.208215ms waiting for pod "kube-proxy-lkkx4" in "kube-system" namespace to be "Ready" ...
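	The repeated "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's local rate limiter rather than from server-side API Priority and Fairness. A minimal sketch of how that limiter is configured through a rest.Config follows; the kubeconfig path and the QPS/Burst values are illustrative assumptions, not minikube's actual settings.

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Build a client the way kubectl would; the kubeconfig location is illustrative.
	        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        // client-go throttles requests locally; when the limiter delays a request it
	        // logs "Waited for ... due to client-side throttling". Raising QPS and Burst
	        // (these values are arbitrary) shortens or removes those waits.
	        config.QPS = 50
	        config.Burst = 100

	        client, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("kube-system pods: %d\n", len(pods.Items))
	    }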
	I0223 14:23:13.322211   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.517861   20778 request.go:622] Waited for 195.513883ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-359000
	I0223 14:23:13.517908   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-359000
	I0223 14:23:13.517916   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.517942   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.517955   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.522391   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:13.522412   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.522421   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.522430   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.522437   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.522444   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.522451   20778 round_trippers.go:580]     Audit-Id: 6d7b42ab-d9e7-4560-88e9-28babffd876a
	I0223 14:23:13.522472   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.522527   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-359000","namespace":"kube-system","uid":"525e88fd-a6fc-470a-a99a-6ceede2058e5","resourceVersion":"291","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"68ba80c02e331ad063843d01029c90d4","kubernetes.io/config.mirror":"68ba80c02e331ad063843d01029c90d4","kubernetes.io/config.seen":"2023-02-23T22:22:43.384451945Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 14:23:13.718151   20778 request.go:622] Waited for 195.325489ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.718269   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.718280   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.718291   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.718303   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.722908   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:13.722921   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.722927   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.722932   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.722936   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.722941   20778 round_trippers.go:580]     Audit-Id: 6dd939fa-9119-4f45-a82f-21d4c06a38a8
	I0223 14:23:13.722946   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.722950   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.723012   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:13.723204   20778 pod_ready.go:92] pod "kube-scheduler-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:13.723212   20778 pod_ready.go:81] duration metric: took 400.992668ms waiting for pod "kube-scheduler-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.723219   20778 pod_ready.go:38] duration metric: took 16.62070692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
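	The 16.6s of "extra waiting" summarized above is a poll loop (pod_ready.go) that repeatedly GETs each system pod and its node until the pod reports the "Ready":"True" condition. A rough client-go sketch of that kind of poll, assuming a local kubeconfig and reusing the coredns pod name and 6m0s timeout from the log, is a simplified stand-in rather than minikube's own code:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // isPodReady reports whether the pod's Ready condition is True, i.e. the
	    // "Ready":"True" state the log is waiting for.
	    func isPodReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }

	    func main() {
	        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        client, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        // Poll every 500ms for up to 6 minutes, matching the "waiting up to 6m0s" lines.
	        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	        defer cancel()
	        for {
	            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-787d4945fb-4hj2n", metav1.GetOptions{})
	            if err == nil && isPodReady(pod) {
	                fmt.Println("pod is Ready")
	                return
	            }
	            select {
	            case <-ctx.Done():
	                panic("timed out waiting for pod to become Ready")
	            case <-time.After(500 * time.Millisecond):
	            }
	        }
	    }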
	I0223 14:23:13.723233   20778 api_server.go:51] waiting for apiserver process to appear ...
	I0223 14:23:13.723289   20778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:23:13.732561   20778 command_runner.go:130] > 2006
	I0223 14:23:13.733202   20778 api_server.go:71] duration metric: took 17.258297848s to wait for apiserver process to appear ...
	I0223 14:23:13.733213   20778 api_server.go:87] waiting for apiserver healthz status ...
	I0223 14:23:13.733224   20778 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:58734/healthz ...
	I0223 14:23:13.738817   20778 api_server.go:278] https://127.0.0.1:58734/healthz returned 200:
	ok
	I0223 14:23:13.738855   20778 round_trippers.go:463] GET https://127.0.0.1:58734/version
	I0223 14:23:13.738861   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.738870   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.738876   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.740041   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:13.740052   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.740058   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.740064   20778 round_trippers.go:580]     Content-Length: 263
	I0223 14:23:13.740069   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.740074   20778 round_trippers.go:580]     Audit-Id: 5a08e647-55cb-40c3-83ee-83b9a1a18305
	I0223 14:23:13.740079   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.740084   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.740096   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.740106   20778 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 14:23:13.740148   20778 api_server.go:140] control plane version: v1.26.1
	I0223 14:23:13.740156   20778 api_server.go:130] duration metric: took 6.939102ms to wait for apiserver health ...
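	Once the pods are Ready, the check probes /healthz and then /version on the apiserver and records the reported control-plane version. A small sketch of the same two probes using client-go's discovery client (kubeconfig path assumed, error handling simplified):

	    package main

	    import (
	        "context"
	        "fmt"

	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        client, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        // GET /healthz through the authenticated REST client; the body "ok"
	        // corresponds to the "returned 200: ok" line in the log.
	        body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("healthz: %s\n", body)

	        // GET /version, decoded into the same fields the log prints
	        // (gitVersion, goVersion, platform, ...).
	        version, err := client.Discovery().ServerVersion()
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("control plane version: %s\n", version.GitVersion)
	    }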
	I0223 14:23:13.740160   20778 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 14:23:13.917857   20778 request.go:622] Waited for 177.652884ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods
	I0223 14:23:13.917997   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods
	I0223 14:23:13.918009   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.918025   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.918036   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.922221   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:13.922236   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.922244   20778 round_trippers.go:580]     Audit-Id: 46cf40f5-a212-4f6a-9544-db09a5453ef2
	I0223 14:23:13.922251   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.922258   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.922264   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.922283   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.922295   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.923644   20778 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"426","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 14:23:13.924906   20778 system_pods.go:59] 8 kube-system pods found
	I0223 14:23:13.924916   20778 system_pods.go:61] "coredns-787d4945fb-4hj2n" [034c3c0c-5eec-4b91-9daf-1317dc6af725] Running
	I0223 14:23:13.924920   20778 system_pods.go:61] "etcd-multinode-359000" [398e38cc-24ea-4f91-8b62-51681eb997b4] Running
	I0223 14:23:13.924926   20778 system_pods.go:61] "kindnet-8hs9x" [89d966b4-fbe8-4c74-83f5-ae4a97ceebc0] Running
	I0223 14:23:13.924931   20778 system_pods.go:61] "kube-apiserver-multinode-359000" [39b152d9-2735-457b-a3a1-5e7aca7dc8f3] Running
	I0223 14:23:13.924934   20778 system_pods.go:61] "kube-controller-manager-multinode-359000" [361170a2-c3b3-4be5-95ca-334b3b892a82] Running
	I0223 14:23:13.924939   20778 system_pods.go:61] "kube-proxy-lkkx4" [42230635-8bb5-4f57-b543-5ddbeada143a] Running
	I0223 14:23:13.924942   20778 system_pods.go:61] "kube-scheduler-multinode-359000" [525e88fd-a6fc-470a-a99a-6ceede2058e5] Running
	I0223 14:23:13.924947   20778 system_pods.go:61] "storage-provisioner" [8f927b9f-d9b7-4b15-9905-e816d50c40bc] Running
	I0223 14:23:13.924952   20778 system_pods.go:74] duration metric: took 184.786418ms to wait for pod list to return data ...
	I0223 14:23:13.924958   20778 default_sa.go:34] waiting for default service account to be created ...
	I0223 14:23:14.119164   20778 request.go:622] Waited for 194.162057ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/namespaces/default/serviceaccounts
	I0223 14:23:14.119214   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/default/serviceaccounts
	I0223 14:23:14.119223   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:14.119235   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:14.119249   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:14.123296   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:14.123313   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:14.123321   20778 round_trippers.go:580]     Content-Length: 261
	I0223 14:23:14.123329   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:14 GMT
	I0223 14:23:14.123337   20778 round_trippers.go:580]     Audit-Id: c7434e29-d38e-4305-89aa-ba01c2e3b085
	I0223 14:23:14.123346   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:14.123356   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:14.123364   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:14.123371   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:14.123385   20778 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"42d0e8e3-00f2-4fab-8d31-6ec487897d7d","resourceVersion":"330","creationTimestamp":"2023-02-23T22:22:55Z"}}]}
	I0223 14:23:14.123505   20778 default_sa.go:45] found service account: "default"
	I0223 14:23:14.123512   20778 default_sa.go:55] duration metric: took 198.547913ms for default service account to be created ...
	I0223 14:23:14.123519   20778 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 14:23:14.317628   20778 request.go:622] Waited for 193.923439ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods
	I0223 14:23:14.317693   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods
	I0223 14:23:14.317703   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:14.317720   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:14.317731   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:14.323007   20778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 14:23:14.323020   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:14.323026   20778 round_trippers.go:580]     Audit-Id: b2d88587-a358-4bfd-a6df-b5403cb46da4
	I0223 14:23:14.323031   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:14.323036   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:14.323049   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:14.323055   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:14.323060   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:14 GMT
	I0223 14:23:14.323419   20778 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"426","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 14:23:14.324675   20778 system_pods.go:86] 8 kube-system pods found
	I0223 14:23:14.324684   20778 system_pods.go:89] "coredns-787d4945fb-4hj2n" [034c3c0c-5eec-4b91-9daf-1317dc6af725] Running
	I0223 14:23:14.324688   20778 system_pods.go:89] "etcd-multinode-359000" [398e38cc-24ea-4f91-8b62-51681eb997b4] Running
	I0223 14:23:14.324692   20778 system_pods.go:89] "kindnet-8hs9x" [89d966b4-fbe8-4c74-83f5-ae4a97ceebc0] Running
	I0223 14:23:14.324696   20778 system_pods.go:89] "kube-apiserver-multinode-359000" [39b152d9-2735-457b-a3a1-5e7aca7dc8f3] Running
	I0223 14:23:14.324700   20778 system_pods.go:89] "kube-controller-manager-multinode-359000" [361170a2-c3b3-4be5-95ca-334b3b892a82] Running
	I0223 14:23:14.324704   20778 system_pods.go:89] "kube-proxy-lkkx4" [42230635-8bb5-4f57-b543-5ddbeada143a] Running
	I0223 14:23:14.324708   20778 system_pods.go:89] "kube-scheduler-multinode-359000" [525e88fd-a6fc-470a-a99a-6ceede2058e5] Running
	I0223 14:23:14.324711   20778 system_pods.go:89] "storage-provisioner" [8f927b9f-d9b7-4b15-9905-e816d50c40bc] Running
	I0223 14:23:14.324716   20778 system_pods.go:126] duration metric: took 201.192484ms to wait for k8s-apps to be running ...
	I0223 14:23:14.324722   20778 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 14:23:14.324779   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:23:14.334751   20778 system_svc.go:56] duration metric: took 10.024824ms WaitForService to wait for kubelet.
	I0223 14:23:14.334763   20778 kubeadm.go:578] duration metric: took 17.859856382s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 14:23:14.334775   20778 node_conditions.go:102] verifying NodePressure condition ...
	I0223 14:23:14.517411   20778 request.go:622] Waited for 182.47484ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/nodes
	I0223 14:23:14.517484   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes
	I0223 14:23:14.517495   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:14.517507   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:14.517520   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:14.521115   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:14.521126   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:14.521131   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:14.521136   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:14.521141   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:14.521146   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:14.521151   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:14 GMT
	I0223 14:23:14.521156   20778 round_trippers.go:580]     Audit-Id: 2b3d52bd-0fd2-4024-9519-3bd516a2549c
	I0223 14:23:14.521223   20778 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5005 chars]
	I0223 14:23:14.521451   20778 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0223 14:23:14.521463   20778 node_conditions.go:123] node cpu capacity is 6
	I0223 14:23:14.521474   20778 node_conditions.go:105] duration metric: took 186.695174ms to run NodePressure ...
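	The ephemeral-storage and cpu figures above are read from each node's status capacity in the NodeList response. A minimal sketch that reads the same fields (kubeconfig path assumed):

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        client, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, node := range nodes.Items {
	            // Capacity maps resource names to quantities; these two entries are the
	            // values logged as "storage ephemeral capacity" and "cpu capacity".
	            storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	            cpu := node.Status.Capacity[corev1.ResourceCPU]
	            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", node.Name, storage.String(), cpu.String())
	        }
	    }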
	I0223 14:23:14.521484   20778 start.go:228] waiting for startup goroutines ...
	I0223 14:23:14.521490   20778 start.go:233] waiting for cluster config update ...
	I0223 14:23:14.521499   20778 start.go:242] writing updated cluster config ...
	I0223 14:23:14.543498   20778 out.go:177] 
	I0223 14:23:14.565360   20778 config.go:182] Loaded profile config "multinode-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:23:14.565473   20778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/config.json ...
	I0223 14:23:14.588078   20778 out.go:177] * Starting worker node multinode-359000-m02 in cluster multinode-359000
	I0223 14:23:14.630002   20778 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 14:23:14.651091   20778 out.go:177] * Pulling base image ...
	I0223 14:23:14.692910   20778 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 14:23:14.692895   20778 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 14:23:14.692965   20778 cache.go:57] Caching tarball of preloaded images
	I0223 14:23:14.693163   20778 preload.go:174] Found /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 14:23:14.693187   20778 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 14:23:14.693314   20778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/config.json ...
	I0223 14:23:14.749919   20778 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 14:23:14.749941   20778 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 14:23:14.749960   20778 cache.go:193] Successfully downloaded all kic artifacts
	I0223 14:23:14.749991   20778 start.go:364] acquiring machines lock for multinode-359000-m02: {Name:mk57942f9b35fbc6d6218dbab8bb92a2c747748c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 14:23:14.750150   20778 start.go:368] acquired machines lock for "multinode-359000-m02" in 147.868µs
	I0223 14:23:14.750175   20778 start.go:93] Provisioning new machine with config: &{Name:multinode-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 14:23:14.750235   20778 start.go:125] createHost starting for "m02" (driver="docker")
	I0223 14:23:14.771991   20778 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 14:23:14.772252   20778 start.go:159] libmachine.API.Create for "multinode-359000" (driver="docker")
	I0223 14:23:14.772294   20778 client.go:168] LocalClient.Create starting
	I0223 14:23:14.772503   20778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem
	I0223 14:23:14.772606   20778 main.go:141] libmachine: Decoding PEM data...
	I0223 14:23:14.772635   20778 main.go:141] libmachine: Parsing certificate...
	I0223 14:23:14.772746   20778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem
	I0223 14:23:14.772812   20778 main.go:141] libmachine: Decoding PEM data...
	I0223 14:23:14.772838   20778 main.go:141] libmachine: Parsing certificate...
	I0223 14:23:14.794209   20778 cli_runner.go:164] Run: docker network inspect multinode-359000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 14:23:14.852419   20778 network_create.go:76] Found existing network {name:multinode-359000 subnet:0xc000f13b00 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0223 14:23:14.852464   20778 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-359000-m02" container
	I0223 14:23:14.852591   20778 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 14:23:14.908120   20778 cli_runner.go:164] Run: docker volume create multinode-359000-m02 --label name.minikube.sigs.k8s.io=multinode-359000-m02 --label created_by.minikube.sigs.k8s.io=true
	I0223 14:23:14.963389   20778 oci.go:103] Successfully created a docker volume multinode-359000-m02
	I0223 14:23:14.963524   20778 cli_runner.go:164] Run: docker run --rm --name multinode-359000-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-359000-m02 --entrypoint /usr/bin/test -v multinode-359000-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 14:23:15.400865   20778 oci.go:107] Successfully prepared a docker volume multinode-359000-m02
	I0223 14:23:15.400905   20778 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 14:23:15.400918   20778 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 14:23:15.401047   20778 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-359000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 14:23:21.697910   20778 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-359000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.296748074s)
	I0223 14:23:21.697931   20778 kic.go:199] duration metric: took 6.296975 seconds to extract preloaded images to volume
	I0223 14:23:21.698055   20778 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 14:23:21.842765   20778 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-359000-m02 --name multinode-359000-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-359000-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-359000-m02 --network multinode-359000 --ip 192.168.58.3 --volume multinode-359000-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 14:23:22.193728   20778 cli_runner.go:164] Run: docker container inspect multinode-359000-m02 --format={{.State.Running}}
	I0223 14:23:22.254754   20778 cli_runner.go:164] Run: docker container inspect multinode-359000-m02 --format={{.State.Status}}
	I0223 14:23:22.320121   20778 cli_runner.go:164] Run: docker exec multinode-359000-m02 stat /var/lib/dpkg/alternatives/iptables
	I0223 14:23:22.436970   20778 oci.go:144] the created container "multinode-359000-m02" has a running status.
	I0223 14:23:22.437001   20778 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa...
	I0223 14:23:22.627292   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 14:23:22.627356   20778 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 14:23:22.731706   20778 cli_runner.go:164] Run: docker container inspect multinode-359000-m02 --format={{.State.Status}}
	I0223 14:23:22.788009   20778 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 14:23:22.788029   20778 kic_runner.go:114] Args: [docker exec --privileged multinode-359000-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 14:23:22.896020   20778 cli_runner.go:164] Run: docker container inspect multinode-359000-m02 --format={{.State.Status}}
	I0223 14:23:22.952931   20778 machine.go:88] provisioning docker machine ...
	I0223 14:23:22.952963   20778 ubuntu.go:169] provisioning hostname "multinode-359000-m02"
	I0223 14:23:22.953077   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:23.041907   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:23:23.042298   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58798 <nil> <nil>}
	I0223 14:23:23.042308   20778 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-359000-m02 && echo "multinode-359000-m02" | sudo tee /etc/hostname
	I0223 14:23:23.183047   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-359000-m02
	
	I0223 14:23:23.183137   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:23.240966   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:23:23.241318   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58798 <nil> <nil>}
	I0223 14:23:23.241331   20778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-359000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-359000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-359000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 14:23:23.375447   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 14:23:23.375465   20778 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
	I0223 14:23:23.375473   20778 ubuntu.go:177] setting up certificates
	I0223 14:23:23.375478   20778 provision.go:83] configureAuth start
	I0223 14:23:23.375560   20778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000-m02
	I0223 14:23:23.432646   20778 provision.go:138] copyHostCerts
	I0223 14:23:23.432692   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:23:23.432752   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
	I0223 14:23:23.432764   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:23:23.432885   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
	I0223 14:23:23.433057   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:23:23.433101   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
	I0223 14:23:23.433106   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:23:23.433170   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
	I0223 14:23:23.433288   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:23:23.433326   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
	I0223 14:23:23.433331   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:23:23.433395   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
	I0223 14:23:23.433523   20778 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.multinode-359000-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-359000-m02]
	I0223 14:23:23.713118   20778 provision.go:172] copyRemoteCerts
	I0223 14:23:23.713177   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 14:23:23.713229   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:23.771686   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58798 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa Username:docker}
	I0223 14:23:23.867004   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 14:23:23.867085   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 14:23:23.884742   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 14:23:23.884832   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0223 14:23:23.902183   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 14:23:23.902269   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 14:23:23.919452   20778 provision.go:86] duration metric: configureAuth took 543.954001ms
	I0223 14:23:23.919467   20778 ubuntu.go:193] setting minikube options for container-runtime
	I0223 14:23:23.919636   20778 config.go:182] Loaded profile config "multinode-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:23:23.919712   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:23.977779   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:23:23.978130   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58798 <nil> <nil>}
	I0223 14:23:23.978141   20778 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 14:23:24.110170   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 14:23:24.110186   20778 ubuntu.go:71] root file system type: overlay
	I0223 14:23:24.110276   20778 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 14:23:24.110354   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:24.169070   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:23:24.169434   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58798 <nil> <nil>}
	I0223 14:23:24.169492   20778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 14:23:24.313183   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 14:23:24.313276   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:24.371727   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:23:24.372083   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58798 <nil> <nil>}
	I0223 14:23:24.372098   20778 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 14:23:24.988788   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:23:24.311424992 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 14:23:24.988811   20778 machine.go:91] provisioned docker machine in 2.03584995s
	I0223 14:23:24.988817   20778 client.go:171] LocalClient.Create took 10.216456615s
	I0223 14:23:24.988835   20778 start.go:167] duration metric: libmachine.API.Create for "multinode-359000" took 10.216529206s
	I0223 14:23:24.988841   20778 start.go:300] post-start starting for "multinode-359000-m02" (driver="docker")
	I0223 14:23:24.988845   20778 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 14:23:24.988930   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 14:23:24.988986   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:25.047811   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58798 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa Username:docker}
	I0223 14:23:25.143051   20778 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 14:23:25.146589   20778 command_runner.go:130] > NAME="Ubuntu"
	I0223 14:23:25.146598   20778 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 14:23:25.146604   20778 command_runner.go:130] > ID=ubuntu
	I0223 14:23:25.146626   20778 command_runner.go:130] > ID_LIKE=debian
	I0223 14:23:25.146636   20778 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 14:23:25.146641   20778 command_runner.go:130] > VERSION_ID="20.04"
	I0223 14:23:25.146648   20778 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 14:23:25.146653   20778 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 14:23:25.146657   20778 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 14:23:25.146668   20778 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 14:23:25.146672   20778 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 14:23:25.146676   20778 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 14:23:25.146738   20778 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 14:23:25.146750   20778 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 14:23:25.146756   20778 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 14:23:25.146761   20778 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 14:23:25.146766   20778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
	I0223 14:23:25.146871   20778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
	I0223 14:23:25.147029   20778 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
	I0223 14:23:25.147035   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> /etc/ssl/certs/152102.pem
	I0223 14:23:25.147207   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 14:23:25.154400   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:23:25.171775   20778 start.go:303] post-start completed in 182.925693ms
	I0223 14:23:25.172298   20778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000-m02
	I0223 14:23:25.231050   20778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/config.json ...
	I0223 14:23:25.231476   20778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 14:23:25.231532   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:25.289247   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58798 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa Username:docker}
	I0223 14:23:25.381822   20778 command_runner.go:130] > 11%!
	(MISSING)I0223 14:23:25.381911   20778 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 14:23:25.386175   20778 command_runner.go:130] > 50G
	I0223 14:23:25.386476   20778 start.go:128] duration metric: createHost completed in 10.63617443s
	I0223 14:23:25.386487   20778 start.go:83] releasing machines lock for "multinode-359000-m02", held for 10.636270241s
	I0223 14:23:25.386578   20778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000-m02
	I0223 14:23:25.468040   20778 out.go:177] * Found network options:
	I0223 14:23:25.490081   20778 out.go:177]   - NO_PROXY=192.168.58.2
	W0223 14:23:25.511216   20778 proxy.go:119] fail to check proxy env: Error ip not in block
	W0223 14:23:25.511276   20778 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 14:23:25.511426   20778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 14:23:25.511490   20778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 14:23:25.511534   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:25.511620   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:25.573252   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58798 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa Username:docker}
	I0223 14:23:25.574739   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58798 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa Username:docker}
	I0223 14:23:25.715181   20778 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 14:23:25.715228   20778 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 14:23:25.715244   20778 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 14:23:25.715250   20778 command_runner.go:130] > Device: 10001bh/1048603d	Inode: 269040      Links: 1
	I0223 14:23:25.715255   20778 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 14:23:25.715263   20778 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 14:23:25.715267   20778 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 14:23:25.715271   20778 command_runner.go:130] > Change: 2023-02-23 21:59:23.933961994 +0000
	I0223 14:23:25.715275   20778 command_runner.go:130] >  Birth: -
	I0223 14:23:25.715367   20778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 14:23:25.735908   20778 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 14:23:25.735988   20778 ssh_runner.go:195] Run: which cri-dockerd
	I0223 14:23:25.739555   20778 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 14:23:25.739774   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 14:23:25.747366   20778 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 14:23:25.759967   20778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 14:23:25.774472   20778 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 14:23:25.774507   20778 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 14:23:25.774516   20778 start.go:485] detecting cgroup driver to use...
	I0223 14:23:25.774530   20778 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:23:25.774612   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:23:25.787192   20778 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 14:23:25.787204   20778 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 14:23:25.787956   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 14:23:25.796437   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 14:23:25.804928   20778 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 14:23:25.804991   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 14:23:25.813830   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:23:25.822755   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 14:23:25.831428   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:23:25.839842   20778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 14:23:25.847596   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 14:23:25.855973   20778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 14:23:25.862383   20778 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 14:23:25.862951   20778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 14:23:25.870200   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:23:25.938701   20778 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 14:23:26.013140   20778 start.go:485] detecting cgroup driver to use...
	I0223 14:23:26.013160   20778 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:23:26.013226   20778 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 14:23:26.022626   20778 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 14:23:26.022719   20778 command_runner.go:130] > [Unit]
	I0223 14:23:26.022729   20778 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 14:23:26.022734   20778 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 14:23:26.022738   20778 command_runner.go:130] > BindsTo=containerd.service
	I0223 14:23:26.022743   20778 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 14:23:26.022746   20778 command_runner.go:130] > Wants=network-online.target
	I0223 14:23:26.022750   20778 command_runner.go:130] > Requires=docker.socket
	I0223 14:23:26.022755   20778 command_runner.go:130] > StartLimitBurst=3
	I0223 14:23:26.022759   20778 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 14:23:26.022766   20778 command_runner.go:130] > [Service]
	I0223 14:23:26.022771   20778 command_runner.go:130] > Type=notify
	I0223 14:23:26.022774   20778 command_runner.go:130] > Restart=on-failure
	I0223 14:23:26.022778   20778 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0223 14:23:26.022783   20778 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 14:23:26.022792   20778 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 14:23:26.022797   20778 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 14:23:26.022802   20778 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 14:23:26.022808   20778 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 14:23:26.022815   20778 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 14:23:26.022820   20778 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 14:23:26.022834   20778 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 14:23:26.022841   20778 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 14:23:26.022844   20778 command_runner.go:130] > ExecStart=
	I0223 14:23:26.022862   20778 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 14:23:26.022867   20778 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 14:23:26.022872   20778 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 14:23:26.022878   20778 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 14:23:26.022883   20778 command_runner.go:130] > LimitNOFILE=infinity
	I0223 14:23:26.022887   20778 command_runner.go:130] > LimitNPROC=infinity
	I0223 14:23:26.022890   20778 command_runner.go:130] > LimitCORE=infinity
	I0223 14:23:26.022895   20778 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 14:23:26.022899   20778 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 14:23:26.022904   20778 command_runner.go:130] > TasksMax=infinity
	I0223 14:23:26.022908   20778 command_runner.go:130] > TimeoutStartSec=0
	I0223 14:23:26.022913   20778 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 14:23:26.022916   20778 command_runner.go:130] > Delegate=yes
	I0223 14:23:26.022925   20778 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 14:23:26.022929   20778 command_runner.go:130] > KillMode=process
	I0223 14:23:26.022932   20778 command_runner.go:130] > [Install]
	I0223 14:23:26.022936   20778 command_runner.go:130] > WantedBy=multi-user.target
	I0223 14:23:26.023526   20778 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 14:23:26.023608   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 14:23:26.033809   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:23:26.047197   20778 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 14:23:26.047211   20778 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 14:23:26.048065   20778 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 14:23:26.126213   20778 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 14:23:26.204125   20778 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 14:23:26.204143   20778 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 14:23:26.218985   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:23:26.308879   20778 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 14:23:26.534777   20778 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 14:23:26.609689   20778 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 14:23:26.609769   20778 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 14:23:26.676857   20778 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 14:23:26.748140   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:23:26.824836   20778 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 14:23:26.844166   20778 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 14:23:26.844261   20778 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 14:23:26.848292   20778 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 14:23:26.848303   20778 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 14:23:26.848310   20778 command_runner.go:130] > Device: 100023h/1048611d	Inode: 206         Links: 1
	I0223 14:23:26.848318   20778 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 14:23:26.848325   20778 command_runner.go:130] > Access: 2023-02-23 22:23:26.832424968 +0000
	I0223 14:23:26.848330   20778 command_runner.go:130] > Modify: 2023-02-23 22:23:26.832424968 +0000
	I0223 14:23:26.848336   20778 command_runner.go:130] > Change: 2023-02-23 22:23:26.841424968 +0000
	I0223 14:23:26.848341   20778 command_runner.go:130] >  Birth: -
	I0223 14:23:26.848431   20778 start.go:553] Will wait 60s for crictl version
	I0223 14:23:26.848473   20778 ssh_runner.go:195] Run: which crictl
	I0223 14:23:26.852030   20778 command_runner.go:130] > /usr/bin/crictl
	I0223 14:23:26.852193   20778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 14:23:26.949296   20778 command_runner.go:130] > Version:  0.1.0
	I0223 14:23:26.949309   20778 command_runner.go:130] > RuntimeName:  docker
	I0223 14:23:26.949314   20778 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 14:23:26.949319   20778 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 14:23:26.951247   20778 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 14:23:26.951322   20778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:23:26.973821   20778 command_runner.go:130] > 23.0.1
	I0223 14:23:26.975402   20778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:23:26.998283   20778 command_runner.go:130] > 23.0.1
	I0223 14:23:27.019920   20778 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 14:23:27.062307   20778 out.go:177]   - env NO_PROXY=192.168.58.2
	I0223 14:23:27.083315   20778 cli_runner.go:164] Run: docker exec -t multinode-359000-m02 dig +short host.docker.internal
	I0223 14:23:27.195252   20778 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 14:23:27.195375   20778 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 14:23:27.199966   20778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:23:27.209948   20778 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000 for IP: 192.168.58.3
	I0223 14:23:27.209967   20778 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:23:27.210144   20778 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
	I0223 14:23:27.210194   20778 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
	I0223 14:23:27.210204   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 14:23:27.210226   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 14:23:27.210245   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 14:23:27.210265   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 14:23:27.210357   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
	W0223 14:23:27.210403   20778 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
	I0223 14:23:27.210414   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 14:23:27.210448   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
	I0223 14:23:27.210482   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
	I0223 14:23:27.210511   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
	I0223 14:23:27.210592   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:23:27.210629   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem -> /usr/share/ca-certificates/15210.pem
	I0223 14:23:27.210652   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> /usr/share/ca-certificates/152102.pem
	I0223 14:23:27.210671   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:23:27.210971   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 14:23:27.228280   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 14:23:27.245504   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 14:23:27.262700   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 14:23:27.279700   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
	I0223 14:23:27.296866   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
	I0223 14:23:27.314057   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 14:23:27.331575   20778 ssh_runner.go:195] Run: openssl version
	I0223 14:23:27.336711   20778 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 14:23:27.337121   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
	I0223 14:23:27.345212   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
	I0223 14:23:27.349341   20778 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:23:27.349371   20778 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:23:27.349417   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
	I0223 14:23:27.354392   20778 command_runner.go:130] > 51391683
	I0223 14:23:27.354832   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
	I0223 14:23:27.362893   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
	I0223 14:23:27.370973   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
	I0223 14:23:27.374765   20778 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:23:27.374889   20778 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:23:27.374938   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
	I0223 14:23:27.379979   20778 command_runner.go:130] > 3ec20f2e
	I0223 14:23:27.380314   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 14:23:27.388493   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 14:23:27.396503   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:23:27.400542   20778 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:23:27.400616   20778 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:23:27.400666   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:23:27.405790   20778 command_runner.go:130] > b5213941
	I0223 14:23:27.406154   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 14:23:27.414200   20778 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 14:23:27.438795   20778 command_runner.go:130] > cgroupfs
	I0223 14:23:27.440485   20778 cni.go:84] Creating CNI manager for ""
	I0223 14:23:27.440496   20778 cni.go:136] 2 nodes found, recommending kindnet
	I0223 14:23:27.440504   20778 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 14:23:27.440521   20778 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-359000 NodeName:multinode-359000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 14:23:27.440620   20778 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-359000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
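Once the primary's control plane is up, the ClusterConfiguration portion of the config dumped above is also stored in the cluster, which is what the worker's join pre-flight reads further below. A quick way to inspect it by hand (standard kubeadm behaviour, not minikube-specific):
	kubectl -n kube-system get cm kubeadm-config -o yaml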
	
	I0223 14:23:27.440676   20778 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-359000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 14:23:27.440746   20778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 14:23:27.447829   20778 command_runner.go:130] > kubeadm
	I0223 14:23:27.447838   20778 command_runner.go:130] > kubectl
	I0223 14:23:27.447842   20778 command_runner.go:130] > kubelet
	I0223 14:23:27.448412   20778 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 14:23:27.448464   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0223 14:23:27.455798   20778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0223 14:23:27.468519   20778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
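The 10-kubeadm.conf drop-in written above replaces the packaged unit's ExecStart with the flags shown earlier, so systemd has to re-read unit files before they take effect; the join step below does that via daemon-reload. A by-hand check on the node would be roughly (hypothetical manual verification, not part of the test):
	sudo systemctl daemon-reload
	systemctl cat kubelet        # kubelet.service plus the 10-kubeadm.conf override
	systemctl is-active kubelet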
	I0223 14:23:27.482122   20778 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 14:23:27.486337   20778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
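The bash one-liner above rewrites /etc/hosts through a temp file so that control-plane.minikube.internal points at the primary node's IP. A quick sanity check on the node (assuming getent is available in the kicbase image):
	getent hosts control-plane.minikube.internal    # expect 192.168.58.2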
	I0223 14:23:27.496415   20778 host.go:66] Checking if "multinode-359000" exists ...
	I0223 14:23:27.496589   20778 config.go:182] Loaded profile config "multinode-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:23:27.496614   20778 start.go:301] JoinCluster: &{Name:multinode-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:23:27.496674   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0223 14:23:27.496757   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:23:27.555298   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:23:27.710569   20778 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mjr27n.4th1hcvqu294bu63 --discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 
	I0223 14:23:27.714935   20778 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 14:23:27.714965   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mjr27n.4th1hcvqu294bu63 --discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-359000-m02"
	I0223 14:23:27.757281   20778 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 14:23:27.870687   20778 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0223 14:23:27.870710   20778 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0223 14:23:27.895584   20778 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 14:23:27.895597   20778 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 14:23:27.895602   20778 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 14:23:27.963514   20778 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0223 14:23:29.479472   20778 command_runner.go:130] > This node has joined the cluster:
	I0223 14:23:29.479491   20778 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0223 14:23:29.479499   20778 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0223 14:23:29.479507   20778 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0223 14:23:29.482891   20778 command_runner.go:130] ! W0223 22:23:27.756576    1231 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 14:23:29.482909   20778 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 14:23:29.482919   20778 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 14:23:29.482936   20778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mjr27n.4th1hcvqu294bu63 --discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-359000-m02": (1.767949449s)
	I0223 14:23:29.482953   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0223 14:23:29.613767   20778 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0223 14:23:29.613791   20778 start.go:303] JoinCluster complete in 2.11716459s
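The join above is the usual two-step kubeadm flow: the primary mints a bootstrap token and prints the join command, and the worker runs it with the CRI socket and node name overridden. Run by hand it would look roughly like this, reusing the throwaway token and CA hash from this run (note kubeadm's warning above about the missing unix:// scheme on the CRI socket):
	# on the control-plane node
	sudo kubeadm token create --print-join-command --ttl=0
	# on the worker node
	sudo kubeadm join control-plane.minikube.internal:8443 \
	  --token mjr27n.4th1hcvqu294bu63 \
	  --discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 \
	  --cri-socket unix:///var/run/cri-dockerd.sock \
	  --node-name multinode-359000-m02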
	I0223 14:23:29.613799   20778 cni.go:84] Creating CNI manager for ""
	I0223 14:23:29.613804   20778 cni.go:136] 2 nodes found, recommending kindnet
	I0223 14:23:29.613899   20778 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 14:23:29.618002   20778 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 14:23:29.618017   20778 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 14:23:29.618029   20778 command_runner.go:130] > Device: a6h/166d	Inode: 267127      Links: 1
	I0223 14:23:29.618037   20778 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 14:23:29.618058   20778 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 14:23:29.618066   20778 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 14:23:29.618074   20778 command_runner.go:130] > Change: 2023-02-23 21:59:23.284856714 +0000
	I0223 14:23:29.618079   20778 command_runner.go:130] >  Birth: -
	I0223 14:23:29.618120   20778 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 14:23:29.618127   20778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 14:23:29.631466   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 14:23:29.819916   20778 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0223 14:23:29.822219   20778 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0223 14:23:29.824049   20778 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0223 14:23:29.832566   20778 command_runner.go:130] > daemonset.apps/kindnet configured
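kubectl apply is idempotent, so the unchanged/configured lines above just mean the existing kindnet objects were reconciled to cover the new node. A quick follow-up check (the app=kindnet label is assumed from the usual kindnet manifest, not shown in the log):
	kubectl -n kube-system get ds kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide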
	I0223 14:23:29.839355   20778 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:23:29.839560   20778 kapi.go:59] client config for multinode-359000: &rest.Config{Host:"https://127.0.0.1:58734", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:23:29.839848   20778 round_trippers.go:463] GET https://127.0.0.1:58734/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 14:23:29.839855   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.839861   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.839867   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.842392   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:29.842403   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.842408   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.842414   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.842420   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.842425   20778 round_trippers.go:580]     Content-Length: 291
	I0223 14:23:29.842430   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.842436   20778 round_trippers.go:580]     Audit-Id: 95e67a2e-cb37-46e9-99dd-be393e303326
	I0223 14:23:29.842442   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.842454   20778 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08757e71-1b54-44ae-9839-af03f5e9d0c0","resourceVersion":"430","creationTimestamp":"2023-02-23T22:22:43Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 14:23:29.842497   20778 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-359000" context rescaled to 1 replicas
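The rescale above goes straight at the deployment's /scale subresource through client-go; the CLI equivalent is simply:
	kubectl -n kube-system scale deployment coredns --replicas=1
	kubectl -n kube-system get deploy coredns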
	I0223 14:23:29.842511   20778 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 14:23:29.864794   20778 out.go:177] * Verifying Kubernetes components...
	I0223 14:23:29.907748   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:23:29.918575   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:23:29.977259   20778 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:23:29.977505   20778 kapi.go:59] client config for multinode-359000: &rest.Config{Host:"https://127.0.0.1:58734", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:23:29.977725   20778 node_ready.go:35] waiting up to 6m0s for node "multinode-359000-m02" to be "Ready" ...
	I0223 14:23:29.977763   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:29.977771   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.977782   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.977788   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.979794   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:29.979811   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.979817   20778 round_trippers.go:580]     Audit-Id: 9a05ea0c-5b0e-493f-9c3c-418719e966a9
	I0223 14:23:29.979822   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.979828   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.979832   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.979838   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.979843   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.979928   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:29.980137   20778 node_ready.go:49] node "multinode-359000-m02" has status "Ready":"True"
	I0223 14:23:29.980142   20778 node_ready.go:38] duration metric: took 2.40989ms waiting for node "multinode-359000-m02" to be "Ready" ...
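node_ready polls the Node object until its Ready condition reports True; here the worker was already Ready on the first GET. A blocking equivalent from the CLI would be roughly:
	kubectl wait --for=condition=Ready node/multinode-359000-m02 --timeout=6m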
	I0223 14:23:29.980148   20778 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 14:23:29.980192   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods
	I0223 14:23:29.980197   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.980203   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.980210   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.983658   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:29.983673   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.983679   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.983686   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.983693   20778 round_trippers.go:580]     Audit-Id: 1bcd57e4-8c7f-4ac6-9286-83805f0611b1
	I0223 14:23:29.983699   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.983706   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.983712   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.984986   20778 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"476"},"items":[{"metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"426","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0223 14:23:29.986650   20778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:29.986693   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:29.986698   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.986704   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.986711   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.989233   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:29.989245   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.989251   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.989258   20778 round_trippers.go:580]     Audit-Id: 96ac6135-ac09-4aac-8975-079fb2277c99
	I0223 14:23:29.989266   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.989271   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.989276   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.989283   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.989353   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"426","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 14:23:29.989616   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:29.989623   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.989628   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.989634   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.991512   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:29.991521   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.991530   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.991535   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.991540   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.991544   20778 round_trippers.go:580]     Audit-Id: a9cfa654-b2fc-4223-9d0e-b2d55126cfd9
	I0223 14:23:29.991549   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.991554   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.991758   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0223 14:23:29.991950   20778 pod_ready.go:92] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:29.991956   20778 pod_ready.go:81] duration metric: took 5.29629ms waiting for pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace to be "Ready" ...
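Each of these pod_ready checks is a GET on the pod followed by a GET on its node; the same readiness condition can be expressed with kubectl wait against the label selectors listed at the start of this phase, for example for CoreDNS:
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m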
	I0223 14:23:29.991962   20778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:29.991992   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/etcd-multinode-359000
	I0223 14:23:29.991998   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.992005   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.992013   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.994032   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:29.994041   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.994047   20778 round_trippers.go:580]     Audit-Id: 6acfb0eb-70ae-4e2f-b912-c01a0f079d36
	I0223 14:23:29.994054   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.994061   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.994066   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.994072   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.994076   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.994125   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-359000","namespace":"kube-system","uid":"398e38cc-24ea-4f91-8b62-51681eb997b4","resourceVersion":"295","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"93ed633257d1dccd5f056f259fe5ad92","kubernetes.io/config.mirror":"93ed633257d1dccd5f056f259fe5ad92","kubernetes.io/config.seen":"2023-02-23T22:22:43.384430470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 14:23:29.994334   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:29.994340   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.994346   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.994351   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.996547   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:29.996555   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.996561   20778 round_trippers.go:580]     Audit-Id: 41cbc821-e19c-4e3b-a3b2-72679d7d825d
	I0223 14:23:29.996566   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.996571   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.996576   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.996581   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.996586   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.996645   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0223 14:23:29.996835   20778 pod_ready.go:92] pod "etcd-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:29.996841   20778 pod_ready.go:81] duration metric: took 4.873738ms waiting for pod "etcd-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:29.996849   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:29.996883   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-359000
	I0223 14:23:29.996888   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.996895   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.996901   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.999183   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:29.999192   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.999198   20778 round_trippers.go:580]     Audit-Id: 1901105c-4c13-49d4-b7ae-80faed6b3c19
	I0223 14:23:29.999207   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.999213   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.999217   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.999222   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.999227   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.999298   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-359000","namespace":"kube-system","uid":"39b152d9-2735-457b-a3a1-5e7aca7dc8f3","resourceVersion":"264","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"cfb3605b4e0ab2e0442f07f281676240","kubernetes.io/config.mirror":"cfb3605b4e0ab2e0442f07f281676240","kubernetes.io/config.seen":"2023-02-23T22:22:43.384450086Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 14:23:29.999552   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:29.999559   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.999567   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.999576   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.001694   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:30.001705   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.001711   20778 round_trippers.go:580]     Audit-Id: 09cec0df-fd0d-4296-9053-360d90ff3633
	I0223 14:23:30.001715   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.001720   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.001730   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.001738   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.001745   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.002609   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0223 14:23:30.003123   20778 pod_ready.go:92] pod "kube-apiserver-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:30.003134   20778 pod_ready.go:81] duration metric: took 6.27815ms waiting for pod "kube-apiserver-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:30.003143   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:30.003355   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-359000
	I0223 14:23:30.003373   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:30.003381   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:30.003412   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.006235   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:30.006246   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.006252   20778 round_trippers.go:580]     Audit-Id: 4842ebf2-e87b-4db3-911e-87d128a8857c
	I0223 14:23:30.006257   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.006262   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.006268   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.006273   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.006278   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.006354   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-359000","namespace":"kube-system","uid":"361170a2-c3b3-4be5-95ca-334b3b892a82","resourceVersion":"267","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2d2ed3414aeb862284d35d22f8aea7e3","kubernetes.io/config.mirror":"2d2ed3414aeb862284d35d22f8aea7e3","kubernetes.io/config.seen":"2023-02-23T22:22:43.384451227Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 14:23:30.006633   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:30.006639   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:30.006645   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:30.006650   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.008622   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:30.008634   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.008639   20778 round_trippers.go:580]     Audit-Id: 1849d8be-57f6-4622-9b96-6136a11c0540
	I0223 14:23:30.008645   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.008650   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.008655   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.008660   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.008665   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.008754   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0223 14:23:30.008939   20778 pod_ready.go:92] pod "kube-controller-manager-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:30.008945   20778 pod_ready.go:81] duration metric: took 5.79652ms waiting for pod "kube-controller-manager-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:30.008951   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lkkx4" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:30.178098   20778 request.go:622] Waited for 169.095013ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-lkkx4
	I0223 14:23:30.178153   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-lkkx4
	I0223 14:23:30.178163   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:30.178175   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.178190   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:30.181894   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:30.181908   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.181914   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.181919   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.181927   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.181933   20778 round_trippers.go:580]     Audit-Id: 0a729be5-01b9-4203-bd78-6647b3bf1e46
	I0223 14:23:30.181939   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.181943   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.182014   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lkkx4","generateName":"kube-proxy-","namespace":"kube-system","uid":"42230635-8bb5-4f57-b543-5ddbeada143a","resourceVersion":"392","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 14:23:30.377978   20778 request.go:622] Waited for 195.675934ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:30.378032   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:30.378122   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:30.378136   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:30.378154   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.381216   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:30.381227   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.381233   20778 round_trippers.go:580]     Audit-Id: 891ee167-80f5-4a4b-a2f6-685bd2308e0c
	I0223 14:23:30.381238   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.381245   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.381250   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.381255   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.381260   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.381514   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0223 14:23:30.381713   20778 pod_ready.go:92] pod "kube-proxy-lkkx4" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:30.381721   20778 pod_ready.go:81] duration metric: took 372.763127ms waiting for pod "kube-proxy-lkkx4" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:30.381727   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-slmv4" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:30.577912   20778 request.go:622] Waited for 196.14555ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:30.577950   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:30.577957   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:30.577966   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:30.577996   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.580758   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:30.580778   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.580786   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.580796   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.580805   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.580812   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.580822   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.580827   20778 round_trippers.go:580]     Audit-Id: ae135953-cd24-4539-9c1d-cbfcc47bba10
	I0223 14:23:30.580894   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"465","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0223 14:23:30.778151   20778 request.go:622] Waited for 196.986376ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:30.778264   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:30.778274   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:30.778286   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:30.778296   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.781546   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:30.781559   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.781568   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.781579   20778 round_trippers.go:580]     Audit-Id: 2df2455f-c8d7-461b-b5d9-912853d06bb3
	I0223 14:23:30.781587   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.781592   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.781605   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.781614   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.781824   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:31.283411   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:31.283438   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:31.283450   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:31.283460   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:31.287837   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:31.287855   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:31.287864   20778 round_trippers.go:580]     Audit-Id: 050f56b6-0ef9-4020-ac2b-d1bf327f0a51
	I0223 14:23:31.287870   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:31.287877   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:31.287885   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:31.287891   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:31.287898   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:31 GMT
	I0223 14:23:31.287995   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"480","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 14:23:31.288247   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:31.288254   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:31.288259   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:31.288265   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:31.290435   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:31.290445   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:31.290453   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:31.290459   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:31.290465   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:31.290474   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:31.290480   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:31 GMT
	I0223 14:23:31.290485   20778 round_trippers.go:580]     Audit-Id: 29148376-0e84-4a3b-aee6-d5281f652ec5
	I0223 14:23:31.290535   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:31.783311   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:31.783330   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:31.783339   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:31.783350   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:31.786138   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:31.786152   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:31.786161   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:31.786167   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:31.786172   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:31.786179   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:31 GMT
	I0223 14:23:31.786188   20778 round_trippers.go:580]     Audit-Id: 667f8784-285d-488e-b058-5399790c6f9a
	I0223 14:23:31.786199   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:31.786374   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"480","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 14:23:31.786621   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:31.786629   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:31.786637   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:31.786642   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:31.788953   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:31.788963   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:31.788969   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:31.788976   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:31.788982   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:31 GMT
	I0223 14:23:31.788987   20778 round_trippers.go:580]     Audit-Id: db2c3769-f20a-4065-9e74-1fccc8d56bd4
	I0223 14:23:31.788992   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:31.788997   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:31.789048   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:32.283396   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:32.283421   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:32.283434   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:32.283444   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:32.287690   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:32.287702   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:32.287707   20778 round_trippers.go:580]     Audit-Id: a6d330ae-51bb-4416-b24b-7fbb9169726a
	I0223 14:23:32.287712   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:32.287717   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:32.287722   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:32.287727   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:32.287735   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:32 GMT
	I0223 14:23:32.287786   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"480","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 14:23:32.288046   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:32.288052   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:32.288058   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:32.288063   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:32.290264   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:32.290274   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:32.290279   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:32.290284   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:32 GMT
	I0223 14:23:32.290291   20778 round_trippers.go:580]     Audit-Id: 5f897324-2dcb-42cd-bbb8-902282ee92d1
	I0223 14:23:32.290296   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:32.290301   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:32.290306   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:32.290351   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:32.783398   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:32.783425   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:32.783437   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:32.783447   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:32.787674   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:32.787691   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:32.787699   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:32.787706   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:32.787713   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:32 GMT
	I0223 14:23:32.787720   20778 round_trippers.go:580]     Audit-Id: 8af6f9e1-4b45-4876-ad20-768bb65c7a12
	I0223 14:23:32.787727   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:32.787734   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:32.787829   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"480","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 14:23:32.788165   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:32.788172   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:32.788179   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:32.788184   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:32.789968   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:32.789982   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:32.789996   20778 round_trippers.go:580]     Audit-Id: acee255e-91aa-4109-821c-bc1564c5b4ff
	I0223 14:23:32.790010   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:32.790022   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:32.790036   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:32.790045   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:32.790057   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:32 GMT
	I0223 14:23:32.790342   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:32.790499   20778 pod_ready.go:102] pod "kube-proxy-slmv4" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:33.283788   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:33.283804   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.283813   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.283818   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.286717   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:33.286728   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.286734   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.286739   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.286744   20778 round_trippers.go:580]     Audit-Id: f67bd822-d284-4466-9733-dc5838b06f2a
	I0223 14:23:33.286749   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.286754   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.286758   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.287093   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"480","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 14:23:33.287381   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:33.287389   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.287395   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.287401   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.289861   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:33.289871   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.289877   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.289882   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.289888   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.289895   20778 round_trippers.go:580]     Audit-Id: dd169bb2-05fe-4d44-909b-8eee8cbe7ad0
	I0223 14:23:33.289901   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.289906   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.289956   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:33.784030   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:33.784058   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.784072   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.784082   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.788522   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:33.788542   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.788550   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.788557   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.788564   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.788580   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.788587   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.788594   20778 round_trippers.go:580]     Audit-Id: 374ee5eb-d529-4485-8877-2f78793b85f7
	I0223 14:23:33.788688   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"488","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0223 14:23:33.789009   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:33.789015   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.789020   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.789026   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.791461   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:33.791472   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.791477   20778 round_trippers.go:580]     Audit-Id: a355332e-c436-4fc9-a31f-5a0115c969a0
	I0223 14:23:33.791483   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.791487   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.791494   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.791499   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.791504   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.791543   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:33.791695   20778 pod_ready.go:92] pod "kube-proxy-slmv4" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:33.791705   20778 pod_ready.go:81] duration metric: took 3.409954941s waiting for pod "kube-proxy-slmv4" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:33.791711   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:33.791736   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-359000
	I0223 14:23:33.791743   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.791749   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.791754   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.793772   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:33.793785   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.793797   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.793805   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.793812   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.793819   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.793824   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.793835   20778 round_trippers.go:580]     Audit-Id: 31aa3316-ffe0-456d-b250-c605e11faf04
	I0223 14:23:33.793997   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-359000","namespace":"kube-system","uid":"525e88fd-a6fc-470a-a99a-6ceede2058e5","resourceVersion":"291","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"68ba80c02e331ad063843d01029c90d4","kubernetes.io/config.mirror":"68ba80c02e331ad063843d01029c90d4","kubernetes.io/config.seen":"2023-02-23T22:22:43.384451945Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 14:23:33.794222   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:33.794230   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.794237   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.794245   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.796486   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:33.796497   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.796502   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.796507   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.796513   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.796522   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.796528   20778 round_trippers.go:580]     Audit-Id: ec0d34c5-2056-4dc8-ad77-faf56577951f
	I0223 14:23:33.796533   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.796586   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0223 14:23:33.796766   20778 pod_ready.go:92] pod "kube-scheduler-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:33.796773   20778 pod_ready.go:81] duration metric: took 5.057538ms waiting for pod "kube-scheduler-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:33.796779   20778 pod_ready.go:38] duration metric: took 3.816603537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 14:23:33.796789   20778 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 14:23:33.796844   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:23:33.806683   20778 system_svc.go:56] duration metric: took 9.890246ms WaitForService to wait for kubelet.
	I0223 14:23:33.806696   20778 kubeadm.go:578] duration metric: took 3.964145372s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 14:23:33.806711   20778 node_conditions.go:102] verifying NodePressure condition ...
	I0223 14:23:33.806752   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes
	I0223 14:23:33.806756   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.806762   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.806767   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.809439   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:33.809452   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.809457   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.809462   20778 round_trippers.go:580]     Audit-Id: dacc17f1-d27c-4eea-a86b-ace3dec29d17
	I0223 14:23:33.809468   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.809476   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.809482   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.809487   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.809583   20778 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"490"},"items":[{"metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10171 chars]
	I0223 14:23:33.809893   20778 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0223 14:23:33.809902   20778 node_conditions.go:123] node cpu capacity is 6
	I0223 14:23:33.809917   20778 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0223 14:23:33.809921   20778 node_conditions.go:123] node cpu capacity is 6
	I0223 14:23:33.809925   20778 node_conditions.go:105] duration metric: took 3.210113ms to run NodePressure ...
	I0223 14:23:33.809933   20778 start.go:228] waiting for startup goroutines ...
	I0223 14:23:33.809950   20778 start.go:242] writing updated cluster config ...
	I0223 14:23:33.837902   20778 ssh_runner.go:195] Run: rm -f paused
	I0223 14:23:33.876362   20778 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0223 14:23:33.897771   20778 out.go:177] * Done! kubectl is now configured to use "multinode-359000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 22:22:26 UTC, end at Thu 2023-02-23 22:23:41 UTC. --
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028508250Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028532946Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028545167Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028595353Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028610510Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028628532Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028672345Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028744747Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028778196Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.029136942Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.029207530Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.029630466Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.037382708Z" level=info msg="Loading containers: start."
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.114607166Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.147532455Z" level=info msg="Loading containers: done."
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.155507972Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.155568140Z" level=info msg="Daemon has completed initialization"
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.176474146Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 22:22:30 multinode-359000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.181922416Z" level=info msg="API listen on [::]:2376"
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.185594222Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 23 22:23:10 multinode-359000 dockerd[832]: time="2023-02-23T22:23:10.579553651Z" level=info msg="ignoring event" container=3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:23:10 multinode-359000 dockerd[832]: time="2023-02-23T22:23:10.689696488Z" level=info msg="ignoring event" container=5a80257db7ab63e30b492ef9edac46fd01ddfb0cd659ea3cf2edcbaf3aa5dc66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:23:11 multinode-359000 dockerd[832]: time="2023-02-23T22:23:11.415971732Z" level=info msg="ignoring event" container=4c53a971712a250235eb0b9c9e7bc48e5fb9546c37a799b3c8dff6dac6086269 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:23:11 multinode-359000 dockerd[832]: time="2023-02-23T22:23:11.473443628Z" level=info msg="ignoring event" container=adf3b8437f58143117fd90eae76df14cd9c62c0581498bbd9a99420c1b6210cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	1ccbed670c9b8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 seconds ago        Running             busybox                   0                   dc3a99606f354
	0599d5d10e4b8       5185b96f0becf                                                                                         30 seconds ago       Running             coredns                   1                   58498fd30ffac
	1ee4943e67d73       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              42 seconds ago       Running             kindnet-cni               0                   a62fff4127d3a
	9e2ec0b97da56       6e38f40d628db                                                                                         44 seconds ago       Running             storage-provisioner       0                   adfcc9ef8d54d
	4c53a971712a2       5185b96f0becf                                                                                         44 seconds ago       Exited              coredns                   0                   adf3b8437f581
	c5e089ae7a37b       46a6bb3c77ce0                                                                                         45 seconds ago       Running             kube-proxy                0                   fb25162c4acdd
	369b8cd310185       deb04688c4a35                                                                                         About a minute ago   Running             kube-apiserver            0                   7936273c5c142
	dcd9a92734499       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   271fbaa821695
	e3f83b3f55f93       655493523f607                                                                                         About a minute ago   Running             kube-scheduler            0                   0f5c9fa66b403
	a0907a2dfdc08       e9c08e11b07f6                                                                                         About a minute ago   Running             kube-controller-manager   0                   d291a87615ae3
	
	* 
	* ==> coredns [0599d5d10e4b] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:35532 - 761 "HINFO IN 94145845304353067.6871346282503012771. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.01453778s
	[INFO] 10.244.0.3:54613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017723s
	[INFO] 10.244.0.3:47317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046373109s
	[INFO] 10.244.0.3:37974 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003413893s
	[INFO] 10.244.0.3:59396 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011871266s
	[INFO] 10.244.0.3:56055 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135247s
	[INFO] 10.244.0.3:53172 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005329762s
	[INFO] 10.244.0.3:36912 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170424s
	[INFO] 10.244.0.3:58427 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152658s
	[INFO] 10.244.0.3:36494 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004240922s
	[INFO] 10.244.0.3:49408 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155638s
	[INFO] 10.244.0.3:58301 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129775s
	[INFO] 10.244.0.3:47060 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113379s
	[INFO] 10.244.0.3:34216 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133927s
	[INFO] 10.244.0.3:50405 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104549s
	[INFO] 10.244.0.3:39896 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102874s
	[INFO] 10.244.0.3:52326 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093339s
	
	* 
	* ==> coredns [4c53a971712a] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 5435270432736386928.6717425237758278781. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 5435270432736386928.6717425237758278781. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-359000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-359000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0
	                    minikube.k8s.io/name=multinode-359000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T14_22_44_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:22:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-359000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:23:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:23:14 +0000   Thu, 23 Feb 2023 22:22:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:23:14 +0000   Thu, 23 Feb 2023 22:22:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:23:14 +0000   Thu, 23 Feb 2023 22:22:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:23:14 +0000   Thu, 23 Feb 2023 22:22:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-359000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    5486766d-d32d-40b6-9600-b780b0c83991
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-ghfsb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-787d4945fb-4hj2n                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     46s
	  kube-system                 etcd-multinode-359000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         58s
	  kube-system                 kindnet-8hs9x                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      46s
	  kube-system                 kube-apiserver-multinode-359000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-controller-manager-multinode-359000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-lkkx4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-scheduler-multinode-359000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 45s   kube-proxy       
	  Normal  Starting                 58s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  58s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                58s   kubelet          Node multinode-359000 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  58s   kubelet          Node multinode-359000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s   kubelet          Node multinode-359000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s   kubelet          Node multinode-359000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s   node-controller  Node multinode-359000 event: Registered Node multinode-359000 in Controller
	
	
	Name:               multinode-359000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-359000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:23:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-359000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:23:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:23:29 +0000   Thu, 23 Feb 2023 22:23:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:23:29 +0000   Thu, 23 Feb 2023 22:23:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:23:29 +0000   Thu, 23 Feb 2023 22:23:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:23:29 +0000   Thu, 23 Feb 2023 22:23:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-359000-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    5486766d-d32d-40b6-9600-b780b0c83991
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-9zw45    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-w7skb               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13s
	  kube-system                 kube-proxy-slmv4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 8s                 kube-proxy       
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)  kubelet          Node multinode-359000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)  kubelet          Node multinode-359000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)  kubelet          Node multinode-359000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12s                kubelet          Node multinode-359000-m02 status is now: NodeReady
	  Normal  RegisteredNode           11s                node-controller  Node multinode-359000-m02 event: Registered Node multinode-359000-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000066] FS-Cache: O-key=[8] '7136580500000000'
	[  +0.000050] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000051] FS-Cache: N-cookie d=00000000f0b26649{9p.inode} n=0000000032a0fa48
	[  +0.000163] FS-Cache: N-key=[8] '7136580500000000'
	[  +0.002658] FS-Cache: Duplicate cookie detected
	[  +0.000052] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000050] FS-Cache: O-cookie d=00000000f0b26649{9p.inode} n=000000008e7781d6
	[  +0.000070] FS-Cache: O-key=[8] '7136580500000000'
	[  +0.000028] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000113] FS-Cache: N-cookie d=00000000f0b26649{9p.inode} n=000000004fece264
	[  +0.000061] FS-Cache: N-key=[8] '7136580500000000'
	[Feb23 22:08] FS-Cache: Duplicate cookie detected
	[  +0.000034] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000058] FS-Cache: O-cookie d=00000000f0b26649{9p.inode} n=000000006ea4f74a
	[  +0.000063] FS-Cache: O-key=[8] '7036580500000000'
	[  +0.000034] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000041] FS-Cache: N-cookie d=00000000f0b26649{9p.inode} n=000000000a668217
	[  +0.000066] FS-Cache: N-key=[8] '7036580500000000'
	[  +0.413052] FS-Cache: Duplicate cookie detected
	[  +0.000113] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000056] FS-Cache: O-cookie d=00000000f0b26649{9p.inode} n=00000000634601b2
	[  +0.000097] FS-Cache: O-key=[8] '7736580500000000'
	[  +0.000045] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000055] FS-Cache: N-cookie d=00000000f0b26649{9p.inode} n=000000004fece264
	[  +0.000089] FS-Cache: N-key=[8] '7736580500000000'
	
	* 
	* ==> etcd [dcd9a9273449] <==
	* {"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T22:22:38.987Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-359000 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T22:22:38.987Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:22:38.987Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:22:38.987Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:22:38.988Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T22:22:38.988Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T22:22:38.988Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:22:38.988Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:22:38.988Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:22:38.988Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T22:22:38.989Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-02-23T22:23:20.368Z","caller":"traceutil/trace.go:171","msg":"trace[1549276223] linearizableReadLoop","detail":"{readStateIndex:453; appliedIndex:452; }","duration":"284.250616ms","start":"2023-02-23T22:23:20.084Z","end":"2023-02-23T22:23:20.368Z","steps":["trace[1549276223] 'read index received'  (duration: 284.082521ms)","trace[1549276223] 'applied index is now lower than readState.Index'  (duration: 167.663µs)"],"step_count":2}
	{"level":"info","ts":"2023-02-23T22:23:20.368Z","caller":"traceutil/trace.go:171","msg":"trace[510834250] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"452.213068ms","start":"2023-02-23T22:23:19.916Z","end":"2023-02-23T22:23:20.368Z","steps":["trace[510834250] 'process raft request'  (duration: 451.938904ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-23T22:23:20.368Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"284.522911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-23T22:23:20.368Z","caller":"traceutil/trace.go:171","msg":"trace[2104721247] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:436; }","duration":"284.661043ms","start":"2023-02-23T22:23:20.084Z","end":"2023-02-23T22:23:20.368Z","steps":["trace[2104721247] 'agreement among raft nodes before linearized reading'  (duration: 284.507096ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-23T22:23:20.368Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"278.009091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-23T22:23:20.368Z","caller":"traceutil/trace.go:171","msg":"trace[885391539] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:436; }","duration":"278.341795ms","start":"2023-02-23T22:23:20.090Z","end":"2023-02-23T22:23:20.368Z","steps":["trace[885391539] 'agreement among raft nodes before linearized reading'  (duration: 277.992004ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-23T22:23:20.368Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-23T22:23:19.916Z","time spent":"452.248921ms","remote":"127.0.0.1:43488","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:435 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	* 
	* ==> kernel <==
	*  22:23:42 up  1:52,  0 users,  load average: 1.60, 1.30, 0.76
	Linux multinode-359000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [1ee4943e67d7] <==
	* I0223 22:22:59.665719       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0223 22:22:59.665837       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0223 22:22:59.665971       1 main.go:116] setting mtu 1500 for CNI 
	I0223 22:22:59.665981       1 main.go:146] kindnetd IP family: "ipv4"
	I0223 22:22:59.665997       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0223 22:23:00.364511       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:23:00.364593       1 main.go:227] handling current node
	I0223 22:23:10.379072       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:23:10.379113       1 main.go:227] handling current node
	I0223 22:23:20.387293       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:23:20.387320       1 main.go:227] handling current node
	I0223 22:23:30.390653       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:23:30.390692       1 main.go:227] handling current node
	I0223 22:23:30.390699       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 22:23:30.390704       1 main.go:250] Node multinode-359000-m02 has CIDR [10.244.1.0/24] 
	I0223 22:23:30.390802       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0223 22:23:40.396312       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:23:40.396355       1 main.go:227] handling current node
	I0223 22:23:40.396364       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 22:23:40.396368       1 main.go:250] Node multinode-359000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [369b8cd31018] <==
	* I0223 22:22:40.203142       1 shared_informer.go:280] Caches are synced for configmaps
	I0223 22:22:40.203225       1 cache.go:39] Caches are synced for autoregister controller
	I0223 22:22:40.203391       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0223 22:22:40.203528       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0223 22:22:40.203588       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0223 22:22:40.204763       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0223 22:22:40.204778       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0223 22:22:40.204790       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 22:22:40.217027       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 22:22:40.926200       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 22:22:41.108587       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0223 22:22:41.111058       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0223 22:22:41.111093       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 22:22:41.586625       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 22:22:41.615595       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0223 22:22:41.730234       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0223 22:22:41.735113       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0223 22:22:41.735776       1 controller.go:615] quota admission added evaluator for: endpoints
	I0223 22:22:41.738959       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0223 22:22:42.131013       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0223 22:22:43.274650       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0223 22:22:43.282271       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0223 22:22:43.289275       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0223 22:22:55.685737       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0223 22:22:55.735535       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [a0907a2dfdc0] <==
	* I0223 22:22:55.090508       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 22:22:55.124886       1 shared_informer.go:280] Caches are synced for cronjob
	I0223 22:22:55.130351       1 shared_informer.go:280] Caches are synced for TTL after finished
	I0223 22:22:55.133629       1 shared_informer.go:280] Caches are synced for job
	I0223 22:22:55.187633       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 22:22:55.571851       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 22:22:55.583552       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 22:22:55.583587       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0223 22:22:55.692009       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8hs9x"
	I0223 22:22:55.693322       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lkkx4"
	I0223 22:22:55.738145       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0223 22:22:55.977601       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0223 22:22:55.988883       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-4rfn2"
	I0223 22:22:55.997144       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-4hj2n"
	I0223 22:22:56.084317       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-4rfn2"
	W0223 22:23:28.781972       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-359000-m02" does not exist
	I0223 22:23:28.787069       1 range_allocator.go:372] Set node multinode-359000-m02 PodCIDR to [10.244.1.0/24]
	I0223 22:23:28.788533       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-slmv4"
	I0223 22:23:28.788847       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w7skb"
	W0223 22:23:29.428543       1 topologycache.go:232] Can't get CPU or zone information for multinode-359000-m02 node
	W0223 22:23:30.068725       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-359000-m02. Assuming now as a timestamp.
	I0223 22:23:30.068981       1 event.go:294] "Event occurred" object="multinode-359000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-359000-m02 event: Registered Node multinode-359000-m02 in Controller"
	I0223 22:23:34.864832       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0223 22:23:34.910137       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-9zw45"
	I0223 22:23:34.913892       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-ghfsb"
	
	* 
	* ==> kube-proxy [c5e089ae7a37] <==
	* I0223 22:22:56.598618       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0223 22:22:56.598704       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0223 22:22:56.598718       1 server_others.go:535] "Using iptables proxy"
	I0223 22:22:56.691218       1 server_others.go:176] "Using iptables Proxier"
	I0223 22:22:56.691317       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0223 22:22:56.691326       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0223 22:22:56.691341       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0223 22:22:56.691362       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0223 22:22:56.692039       1 server.go:655] "Version info" version="v1.26.1"
	I0223 22:22:56.692097       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 22:22:56.692753       1 config.go:317] "Starting service config controller"
	I0223 22:22:56.692788       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0223 22:22:56.692810       1 config.go:226] "Starting endpoint slice config controller"
	I0223 22:22:56.692813       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0223 22:22:56.693303       1 config.go:444] "Starting node config controller"
	I0223 22:22:56.693311       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0223 22:22:56.792946       1 shared_informer.go:280] Caches are synced for service config
	I0223 22:22:56.793027       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0223 22:22:56.794356       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e3f83b3f55f9] <==
	* W0223 22:22:40.168265       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0223 22:22:40.168600       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0223 22:22:40.168616       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0223 22:22:40.168626       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0223 22:22:40.168723       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0223 22:22:40.168751       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0223 22:22:40.168774       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0223 22:22:40.168800       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0223 22:22:40.168966       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0223 22:22:40.169023       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0223 22:22:40.169218       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0223 22:22:40.169260       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0223 22:22:40.169918       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0223 22:22:40.169960       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0223 22:22:40.169976       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0223 22:22:40.169989       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0223 22:22:41.078835       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0223 22:22:41.078897       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0223 22:22:41.274491       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0223 22:22:41.274537       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0223 22:22:41.388336       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0223 22:22:41.388380       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0223 22:22:41.579345       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0223 22:22:41.579455       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0223 22:22:43.928827       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 22:22:26 UTC, end at Thu 2023-02-23 22:23:42 UTC. --
	Feb 23 22:22:57 multinode-359000 kubelet[2137]: I0223 22:22:57.098200    2137 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8f927b9f-d9b7-4b15-9905-e816d50c40bc-tmp\") pod \"storage-provisioner\" (UID: \"8f927b9f-d9b7-4b15-9905-e816d50c40bc\") " pod="kube-system/storage-provisioner"
	Feb 23 22:22:58 multinode-359000 kubelet[2137]: I0223 22:22:58.228086    2137 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-4hj2n" podStartSLOduration=3.228059507 pod.CreationTimestamp="2023-02-23 22:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:22:57.827271839 +0000 UTC m=+14.569256017" watchObservedRunningTime="2023-02-23 22:22:58.228059507 +0000 UTC m=+14.970043679"
	Feb 23 22:22:58 multinode-359000 kubelet[2137]: I0223 22:22:58.672194    2137 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lkkx4" podStartSLOduration=3.672166195 pod.CreationTimestamp="2023-02-23 22:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:22:58.228028265 +0000 UTC m=+14.970012439" watchObservedRunningTime="2023-02-23 22:22:58.672166195 +0000 UTC m=+15.414150373"
	Feb 23 22:22:59 multinode-359000 kubelet[2137]: I0223 22:22:59.069543    2137 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-4rfn2" podStartSLOduration=4.069498635 pod.CreationTimestamp="2023-02-23 22:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:22:58.672393657 +0000 UTC m=+15.414377836" watchObservedRunningTime="2023-02-23 22:22:59.069498635 +0000 UTC m=+15.811482808"
	Feb 23 22:22:59 multinode-359000 kubelet[2137]: I0223 22:22:59.691160    2137 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.691134144 pod.CreationTimestamp="2023-02-23 22:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:22:59.06975258 +0000 UTC m=+15.811736760" watchObservedRunningTime="2023-02-23 22:22:59.691134144 +0000 UTC m=+16.433118317"
	Feb 23 22:23:04 multinode-359000 kubelet[2137]: I0223 22:23:04.801240    2137 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 23 22:23:04 multinode-359000 kubelet[2137]: I0223 22:23:04.801694    2137 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.792165    2137 scope.go:115] "RemoveContainer" containerID="3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8"
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.801977    2137 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-8hs9x" podStartSLOduration=-9.223372021052822e+09 pod.CreationTimestamp="2023-02-23 22:22:55 +0000 UTC" firstStartedPulling="2023-02-23 22:22:56.393870569 +0000 UTC m=+13.135854737" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:22:59.691259533 +0000 UTC m=+16.433243702" watchObservedRunningTime="2023-02-23 22:23:10.801953048 +0000 UTC m=+27.543937221"
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.802495    2137 scope.go:115] "RemoveContainer" containerID="3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8"
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: E0223 22:23:10.803260    2137 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8" containerID="3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8"
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.803310    2137 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8} err="failed to get container status \"3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8\": rpc error: code = Unknown desc = Error: No such container: 3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8"
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.893197    2137 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q7ws\" (UniqueName: \"kubernetes.io/projected/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54-kube-api-access-6q7ws\") pod \"1cd33c9e-c0c4-48ac-88d1-a643a0eebc54\" (UID: \"1cd33c9e-c0c4-48ac-88d1-a643a0eebc54\") "
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.893258    2137 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54-config-volume\") pod \"1cd33c9e-c0c4-48ac-88d1-a643a0eebc54\" (UID: \"1cd33c9e-c0c4-48ac-88d1-a643a0eebc54\") "
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: W0223 22:23:10.893451    2137 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.893610    2137 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54-config-volume" (OuterVolumeSpecName: "config-volume") pod "1cd33c9e-c0c4-48ac-88d1-a643a0eebc54" (UID: "1cd33c9e-c0c4-48ac-88d1-a643a0eebc54"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.895084    2137 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54-kube-api-access-6q7ws" (OuterVolumeSpecName: "kube-api-access-6q7ws") pod "1cd33c9e-c0c4-48ac-88d1-a643a0eebc54" (UID: "1cd33c9e-c0c4-48ac-88d1-a643a0eebc54"). InnerVolumeSpecName "kube-api-access-6q7ws". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.993557    2137 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-6q7ws\" (UniqueName: \"kubernetes.io/projected/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54-kube-api-access-6q7ws\") on node \"multinode-359000\" DevicePath \"\""
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.993628    2137 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54-config-volume\") on node \"multinode-359000\" DevicePath \"\""
	Feb 23 22:23:11 multinode-359000 kubelet[2137]: I0223 22:23:11.482961    2137 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1cd33c9e-c0c4-48ac-88d1-a643a0eebc54 path="/var/lib/kubelet/pods/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54/volumes"
	Feb 23 22:23:11 multinode-359000 kubelet[2137]: I0223 22:23:11.809813    2137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf3b8437f58143117fd90eae76df14cd9c62c0581498bbd9a99420c1b6210cc"
	Feb 23 22:23:34 multinode-359000 kubelet[2137]: I0223 22:23:34.919177    2137 topology_manager.go:210] "Topology Admit Handler"
	Feb 23 22:23:34 multinode-359000 kubelet[2137]: E0223 22:23:34.919234    2137 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1cd33c9e-c0c4-48ac-88d1-a643a0eebc54" containerName="coredns"
	Feb 23 22:23:34 multinode-359000 kubelet[2137]: I0223 22:23:34.919258    2137 memory_manager.go:346] "RemoveStaleState removing state" podUID="1cd33c9e-c0c4-48ac-88d1-a643a0eebc54" containerName="coredns"
	Feb 23 22:23:35 multinode-359000 kubelet[2137]: I0223 22:23:35.070806    2137 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8472\" (UniqueName: \"kubernetes.io/projected/e915d92f-ced7-45d8-9cde-6049a324e6f5-kube-api-access-j8472\") pod \"busybox-6b86dd6d48-ghfsb\" (UID: \"e915d92f-ced7-45d8-9cde-6049a324e6f5\") " pod="default/busybox-6b86dd6d48-ghfsb"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-359000 -n multinode-359000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-359000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (9.04s)
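For reference, the post-mortem check at helpers_test.go:261 above lists every pod whose status.phase is not Running. The following standalone Go sketch reproduces that same query; it is illustrative only (it assumes kubectl and the multinode-359000 context are available on the host) and is not part of the test harness.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Reproduces the harness's post-mortem query: print the names of all
// pods, across all namespaces, that are not in the Running phase.
func main() {
	out, err := exec.Command("kubectl",
		"--context", "multinode-359000",
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o=jsonpath={.items[*].metadata.name}",
	).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err, string(out))
		return
	}
	names := strings.Fields(string(out))
	fmt.Printf("%d non-Running pods: %v\n", len(names), names)
}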

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-9zw45 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: minikube host ip is nil: 
** stderr ** 
	nslookup: can't resolve 'host.minikube.internal'

                                                
                                                
** /stderr **
multinode_test.go:558: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-9zw45 -- sh -c "ping -c 1 <nil>"
multinode_test.go:558: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-9zw45 -- sh -c "ping -c 1 <nil>": exit status 2 (161.857826ms)

                                                
                                                
** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

                                                
                                                
** /stderr **
multinode_test.go:559: Failed to ping host (<nil>) from pod (busybox-6b86dd6d48-9zw45): exit status 2
multinode_test.go:547: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-ghfsb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-359000 -- exec busybox-6b86dd6d48-ghfsb -- sh -c "ping -c 1 192.168.65.2"
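The first pod's failure above follows from the DNS step: nslookup could not resolve host.minikube.internal inside busybox-6b86dd6d48-9zw45, so the extracted host IP was empty, the test reported it as <nil>, and the generated command sh -c "ping -c 1 <nil>" then failed because sh treats < and > as redirection operators and the trailing > has no target. The Go sketch below is a hypothetical reproduction of that chain (it is not the actual multinode_test.go code; the helper name hostIPFromNslookup and the exact parsing are assumptions).

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics the shell pipeline
// "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3":
// take line 5 of the output and return its third field, or nil if the
// lookup produced no such line or field.
func hostIPFromNslookup(out string) *string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return nil // resolution failed: no answer section to cut from
	}
	fields := strings.Fields(lines[4])
	if len(fields) < 3 {
		return nil
	}
	return &fields[2]
}

func main() {
	// Simulated pod output when resolution fails, as in the log above.
	failed := "nslookup: can't resolve 'host.minikube.internal'"
	ip := hostIPFromNslookup(failed)
	// A nil pointer formats as "<nil>" under %v, which is exactly the
	// broken command seen above.
	fmt.Printf("sh -c \"ping -c 1 %v\"\n", ip) // sh -c "ping -c 1 <nil>"
}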
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-359000
helpers_test.go:235: (dbg) docker inspect multinode-359000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "35fabfa71c4d986b310a8326ab076114e2f237bb41fec2615956993c06fbf7d4",
	        "Created": "2023-02-23T22:22:25.825690898Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 92023,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:22:26.110802915Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/35fabfa71c4d986b310a8326ab076114e2f237bb41fec2615956993c06fbf7d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/35fabfa71c4d986b310a8326ab076114e2f237bb41fec2615956993c06fbf7d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/35fabfa71c4d986b310a8326ab076114e2f237bb41fec2615956993c06fbf7d4/hosts",
	        "LogPath": "/var/lib/docker/containers/35fabfa71c4d986b310a8326ab076114e2f237bb41fec2615956993c06fbf7d4/35fabfa71c4d986b310a8326ab076114e2f237bb41fec2615956993c06fbf7d4-json.log",
	        "Name": "/multinode-359000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-359000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-359000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b37c88dee6e8f718050d9c3a882d1a738a21392f3c214bc2ae682d08a8c774bd-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b37c88dee6e8f718050d9c3a882d1a738a21392f3c214bc2ae682d08a8c774bd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b37c88dee6e8f718050d9c3a882d1a738a21392f3c214bc2ae682d08a8c774bd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b37c88dee6e8f718050d9c3a882d1a738a21392f3c214bc2ae682d08a8c774bd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-359000",
	                "Source": "/var/lib/docker/volumes/multinode-359000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-359000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-359000",
	                "name.minikube.sigs.k8s.io": "multinode-359000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a41ba8123e07116bee7f51c22243e8946c5457cdfd3d10fa3a4cddc3a333965",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58730"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58731"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58732"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58733"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58734"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5a41ba8123e0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-359000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "35fabfa71c4d",
	                        "multinode-359000"
	                    ],
	                    "NetworkID": "eb5aa03044a362392a7a3116bd1898165c0320685f48ef9fd4102df2baf38b21",
	                    "EndpointID": "229a2d32b60fa9d4b1b657244e8662288ca0eb664054d671cb85c2a7d04688ad",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
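As a side note, the NetworkSettings.Ports section of the inspect output above records where each container port is published on the host; 8443/tcp (the API server port inside the kicbase container) is bound to 127.0.0.1:58734. The small Go sketch below pulls that binding out of docker inspect; it assumes docker and the multinode-359000 container are present, and the inspectEntry type is an illustrative helper, not part of the test suite.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal view of the docker inspect JSON shown above: only the
// published-port map is decoded.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "multinode-359000").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	// For the output above this prints 127.0.0.1:58734.
	for _, b := range entries[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
	}
}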
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-359000 -n multinode-359000
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-359000 logs -n 25: (2.48321874s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-004000                           | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:21 PST | 23 Feb 23 14:22 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-2-004000 ssh -- ls                    | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-990000                           | mount-start-1-990000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-004000 ssh -- ls                    | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-004000                           | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	| start   | -p mount-start-2-004000                           | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	| ssh     | mount-start-2-004000 ssh -- ls                    | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-004000                           | mount-start-2-004000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	| delete  | -p mount-start-1-990000                           | mount-start-1-990000 | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:22 PST |
	| start   | -p multinode-359000                               | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:22 PST | 23 Feb 23 14:23 PST |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- apply -f                   | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- rollout                    | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- get pods -o                | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- get pods -o                | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST |                     |
	|         | busybox-6b86dd6d48-9zw45 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | busybox-6b86dd6d48-ghfsb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST |                     |
	|         | busybox-6b86dd6d48-9zw45 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | busybox-6b86dd6d48-ghfsb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST |                     |
	|         | busybox-6b86dd6d48-9zw45 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | busybox-6b86dd6d48-ghfsb -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- get pods -o                | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | busybox-6b86dd6d48-9zw45                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST |                     |
	|         | busybox-6b86dd6d48-9zw45 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 <nil>                                |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | busybox-6b86dd6d48-ghfsb                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-359000 -- exec                       | multinode-359000     | jenkins | v1.29.0 | 23 Feb 23 14:23 PST | 23 Feb 23 14:23 PST |
	|         | busybox-6b86dd6d48-ghfsb -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.65.2                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 14:22:17
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 14:22:17.997568   20778 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:22:17.997723   20778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:22:17.997728   20778 out.go:309] Setting ErrFile to fd 2...
	I0223 14:22:17.997732   20778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:22:17.997857   20778 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:22:17.999185   20778 out.go:303] Setting JSON to false
	I0223 14:22:18.017507   20778 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6713,"bootTime":1677184225,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 14:22:18.017590   20778 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 14:22:18.039653   20778 out.go:177] * [multinode-359000] minikube v1.29.0 on Darwin 13.2
	I0223 14:22:18.083897   20778 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 14:22:18.083893   20778 notify.go:220] Checking for updates...
	I0223 14:22:18.105726   20778 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:22:18.127870   20778 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 14:22:18.149849   20778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 14:22:18.171667   20778 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	I0223 14:22:18.192824   20778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 14:22:18.215094   20778 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 14:22:18.276848   20778 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 14:22:18.276960   20778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:22:18.420106   20778 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 22:22:18.326028873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:22:18.462050   20778 out.go:177] * Using the docker driver based on user configuration
	I0223 14:22:18.483031   20778 start.go:296] selected driver: docker
	I0223 14:22:18.483049   20778 start.go:857] validating driver "docker" against <nil>
	I0223 14:22:18.483059   20778 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 14:22:18.485614   20778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:22:18.627078   20778 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 22:22:18.53469381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:22:18.627216   20778 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 14:22:18.627401   20778 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 14:22:18.648439   20778 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 14:22:18.669475   20778 cni.go:84] Creating CNI manager for ""
	I0223 14:22:18.669503   20778 cni.go:136] 0 nodes found, recommending kindnet
	I0223 14:22:18.669514   20778 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0223 14:22:18.669536   20778 start_flags.go:319] config:
	{Name:multinode-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:22:18.713150   20778 out.go:177] * Starting control plane node multinode-359000 in cluster multinode-359000
	I0223 14:22:18.734528   20778 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 14:22:18.756488   20778 out.go:177] * Pulling base image ...
	I0223 14:22:18.799601   20778 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 14:22:18.799662   20778 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 14:22:18.799701   20778 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 14:22:18.799721   20778 cache.go:57] Caching tarball of preloaded images
	I0223 14:22:18.799932   20778 preload.go:174] Found /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 14:22:18.799951   20778 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 14:22:18.802483   20778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/config.json ...
	I0223 14:22:18.802534   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/config.json: {Name:mk48cc9f4da0284d12aeeaf021c24cd89028c83b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:18.855090   20778 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 14:22:18.855109   20778 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 14:22:18.855128   20778 cache.go:193] Successfully downloaded all kic artifacts
	I0223 14:22:18.855168   20778 start.go:364] acquiring machines lock for multinode-359000: {Name:mk4618dcf142341b2bdb2e619b88566b84020269 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 14:22:18.855322   20778 start.go:368] acquired machines lock for "multinode-359000" in 141.911µs
	I0223 14:22:18.855365   20778 start.go:93] Provisioning new machine with config: &{Name:multinode-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 14:22:18.855415   20778 start.go:125] createHost starting for "" (driver="docker")
	I0223 14:22:18.878367   20778 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 14:22:18.878806   20778 start.go:159] libmachine.API.Create for "multinode-359000" (driver="docker")
	I0223 14:22:18.878852   20778 client.go:168] LocalClient.Create starting
	I0223 14:22:18.879060   20778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem
	I0223 14:22:18.879150   20778 main.go:141] libmachine: Decoding PEM data...
	I0223 14:22:18.879185   20778 main.go:141] libmachine: Parsing certificate...
	I0223 14:22:18.879303   20778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem
	I0223 14:22:18.879369   20778 main.go:141] libmachine: Decoding PEM data...
	I0223 14:22:18.879388   20778 main.go:141] libmachine: Parsing certificate...
	I0223 14:22:18.880263   20778 cli_runner.go:164] Run: docker network inspect multinode-359000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 14:22:18.935792   20778 cli_runner.go:211] docker network inspect multinode-359000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 14:22:18.935899   20778 network_create.go:281] running [docker network inspect multinode-359000] to gather additional debugging logs...
	I0223 14:22:18.935917   20778 cli_runner.go:164] Run: docker network inspect multinode-359000
	W0223 14:22:18.989285   20778 cli_runner.go:211] docker network inspect multinode-359000 returned with exit code 1
	I0223 14:22:18.989313   20778 network_create.go:284] error running [docker network inspect multinode-359000]: docker network inspect multinode-359000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-359000
	I0223 14:22:18.989328   20778 network_create.go:286] output of [docker network inspect multinode-359000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-359000
	
	** /stderr **
	I0223 14:22:18.989400   20778 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 14:22:19.045130   20778 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 14:22:19.045485   20778 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0006ef790}
	I0223 14:22:19.045498   20778 network_create.go:123] attempt to create docker network multinode-359000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 14:22:19.045572   20778 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-359000 multinode-359000
	I0223 14:22:19.132062   20778 network_create.go:107] docker network multinode-359000 192.168.58.0/24 created
	I0223 14:22:19.132091   20778 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-359000" container
	I0223 14:22:19.132246   20778 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 14:22:19.186179   20778 cli_runner.go:164] Run: docker volume create multinode-359000 --label name.minikube.sigs.k8s.io=multinode-359000 --label created_by.minikube.sigs.k8s.io=true
	I0223 14:22:19.240886   20778 oci.go:103] Successfully created a docker volume multinode-359000
	I0223 14:22:19.241033   20778 cli_runner.go:164] Run: docker run --rm --name multinode-359000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-359000 --entrypoint /usr/bin/test -v multinode-359000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 14:22:19.665975   20778 oci.go:107] Successfully prepared a docker volume multinode-359000
	I0223 14:22:19.666026   20778 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 14:22:19.666040   20778 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 14:22:19.666151   20778 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-359000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 14:22:25.631787   20778 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-359000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (5.965534313s)
	I0223 14:22:25.631808   20778 kic.go:199] duration metric: took 5.965735 seconds to extract preloaded images to volume
	I0223 14:22:25.631932   20778 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 14:22:25.772664   20778 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-359000 --name multinode-359000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-359000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-359000 --network multinode-359000 --ip 192.168.58.2 --volume multinode-359000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 14:22:26.119144   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Running}}
	I0223 14:22:26.178445   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:22:26.240245   20778 cli_runner.go:164] Run: docker exec multinode-359000 stat /var/lib/dpkg/alternatives/iptables
	I0223 14:22:26.348772   20778 oci.go:144] the created container "multinode-359000" has a running status.
	I0223 14:22:26.348801   20778 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa...
	I0223 14:22:26.567287   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 14:22:26.567362   20778 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 14:22:26.670716   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:22:26.727049   20778 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 14:22:26.727069   20778 kic_runner.go:114] Args: [docker exec --privileged multinode-359000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 14:22:26.831090   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:22:26.886450   20778 machine.go:88] provisioning docker machine ...
	I0223 14:22:26.886505   20778 ubuntu.go:169] provisioning hostname "multinode-359000"
	I0223 14:22:26.886606   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:26.972174   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:22:26.972577   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58730 <nil> <nil>}
	I0223 14:22:26.972595   20778 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-359000 && echo "multinode-359000" | sudo tee /etc/hostname
	I0223 14:22:27.114547   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-359000
	
	I0223 14:22:27.114642   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:27.170811   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:22:27.171170   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58730 <nil> <nil>}
	I0223 14:22:27.171184   20778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-359000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-359000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-359000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 14:22:27.305059   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 14:22:27.305087   20778 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
	I0223 14:22:27.305103   20778 ubuntu.go:177] setting up certificates
	I0223 14:22:27.305108   20778 provision.go:83] configureAuth start
	I0223 14:22:27.305183   20778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000
	I0223 14:22:27.361348   20778 provision.go:138] copyHostCerts
	I0223 14:22:27.361394   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:22:27.361454   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
	I0223 14:22:27.361463   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:22:27.361583   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
	I0223 14:22:27.361754   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:22:27.361785   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
	I0223 14:22:27.361790   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:22:27.361854   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
	I0223 14:22:27.361973   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:22:27.362008   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
	I0223 14:22:27.362012   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:22:27.362074   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
	I0223 14:22:27.362198   20778 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.multinode-359000 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-359000]
	I0223 14:22:27.440484   20778 provision.go:172] copyRemoteCerts
	I0223 14:22:27.440541   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 14:22:27.440600   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:27.497303   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:27.590994   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 14:22:27.591094   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 14:22:27.607997   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 14:22:27.608078   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0223 14:22:27.624915   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 14:22:27.624995   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 14:22:27.641981   20778 provision.go:86] duration metric: configureAuth took 336.859361ms
	I0223 14:22:27.641996   20778 ubuntu.go:193] setting minikube options for container-runtime
	I0223 14:22:27.642158   20778 config.go:182] Loaded profile config "multinode-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:22:27.642228   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:27.698110   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:22:27.698461   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58730 <nil> <nil>}
	I0223 14:22:27.698477   20778 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 14:22:27.831738   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 14:22:27.831759   20778 ubuntu.go:71] root file system type: overlay
	I0223 14:22:27.831846   20778 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 14:22:27.831933   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:27.889076   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:22:27.889472   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58730 <nil> <nil>}
	I0223 14:22:27.889521   20778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 14:22:28.033987   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 14:22:28.034099   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:28.090960   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:22:28.091310   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58730 <nil> <nil>}
	I0223 14:22:28.091323   20778 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 14:22:28.692040   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:22:28.032036957 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 14:22:28.692065   20778 machine.go:91] provisioned docker machine in 1.805585984s
	I0223 14:22:28.692071   20778 client.go:171] LocalClient.Create took 9.813156767s
	I0223 14:22:28.692087   20778 start.go:167] duration metric: libmachine.API.Create for "multinode-359000" took 9.813228955s
	I0223 14:22:28.692096   20778 start.go:300] post-start starting for "multinode-359000" (driver="docker")
	I0223 14:22:28.692101   20778 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 14:22:28.692176   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 14:22:28.692231   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:28.750799   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:28.847120   20778 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 14:22:28.850824   20778 command_runner.go:130] > NAME="Ubuntu"
	I0223 14:22:28.850833   20778 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 14:22:28.850837   20778 command_runner.go:130] > ID=ubuntu
	I0223 14:22:28.850853   20778 command_runner.go:130] > ID_LIKE=debian
	I0223 14:22:28.850864   20778 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 14:22:28.850868   20778 command_runner.go:130] > VERSION_ID="20.04"
	I0223 14:22:28.850872   20778 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 14:22:28.850877   20778 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 14:22:28.850881   20778 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 14:22:28.850894   20778 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 14:22:28.850898   20778 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 14:22:28.850902   20778 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 14:22:28.850946   20778 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 14:22:28.850960   20778 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 14:22:28.850966   20778 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 14:22:28.850971   20778 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 14:22:28.850981   20778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
	I0223 14:22:28.851081   20778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
	I0223 14:22:28.851269   20778 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
	I0223 14:22:28.851276   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> /etc/ssl/certs/152102.pem
	I0223 14:22:28.851475   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 14:22:28.858643   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:22:28.875607   20778 start.go:303] post-start completed in 183.501003ms
	I0223 14:22:28.876132   20778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000
	I0223 14:22:28.933182   20778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/config.json ...
	I0223 14:22:28.933592   20778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 14:22:28.933655   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:28.989909   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:29.081987   20778 command_runner.go:130] > 9%!
	(MISSING)I0223 14:22:29.082063   20778 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 14:22:29.086343   20778 command_runner.go:130] > 51G
	I0223 14:22:29.086706   20778 start.go:128] duration metric: createHost completed in 10.231226324s
	I0223 14:22:29.086721   20778 start.go:83] releasing machines lock for "multinode-359000", held for 10.231334204s
	I0223 14:22:29.086800   20778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000
	I0223 14:22:29.143690   20778 ssh_runner.go:195] Run: cat /version.json
	I0223 14:22:29.143707   20778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 14:22:29.143766   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:29.143782   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:29.202449   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:29.203618   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:29.345068   20778 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 14:22:29.346336   20778 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0223 14:22:29.346461   20778 ssh_runner.go:195] Run: systemctl --version
	I0223 14:22:29.350887   20778 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0223 14:22:29.350907   20778 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0223 14:22:29.351192   20778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 14:22:29.356299   20778 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 14:22:29.356312   20778 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 14:22:29.356321   20778 command_runner.go:130] > Device: a6h/166d	Inode: 269040      Links: 1
	I0223 14:22:29.356330   20778 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 14:22:29.356338   20778 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 14:22:29.356342   20778 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 14:22:29.356346   20778 command_runner.go:130] > Change: 2023-02-23 21:59:23.933961994 +0000
	I0223 14:22:29.356350   20778 command_runner.go:130] >  Birth: -
	I0223 14:22:29.356645   20778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 14:22:29.376246   20778 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
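
A quick spot-check of what that find/sed pass does to the loopback config (the before/after JSON is illustrative; only the added "name" key and the cniVersion bump are implied by the command above):
  cat /etc/cni/net.d/200-loopback.conf
  # before (illustrative): { "cniVersion": "0.3.1", "type": "loopback" }
  # after the patch:       { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }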
	I0223 14:22:29.376339   20778 ssh_runner.go:195] Run: which cri-dockerd
	I0223 14:22:29.379965   20778 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 14:22:29.380147   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 14:22:29.387517   20778 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 14:22:29.399953   20778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 14:22:29.414326   20778 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 14:22:29.414354   20778 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 14:22:29.414368   20778 start.go:485] detecting cgroup driver to use...
	I0223 14:22:29.414380   20778 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:22:29.414464   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:22:29.426735   20778 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 14:22:29.426747   20778 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 14:22:29.427597   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 14:22:29.436078   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 14:22:29.444330   20778 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 14:22:29.444382   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 14:22:29.452683   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:22:29.460836   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 14:22:29.469038   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:22:29.477245   20778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 14:22:29.485018   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
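
A sketch for confirming the toggles those sed edits target, assuming the default config.toml layout:
  grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
  # expected after the edits above:
  #   sandbox_image = "registry.k8s.io/pause:3.9"
  #   SystemdCgroup = false
  #   conf_dir = "/etc/cni/net.d"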
	I0223 14:22:29.493417   20778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 14:22:29.499949   20778 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 14:22:29.500616   20778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 14:22:29.507472   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:22:29.570144   20778 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 14:22:29.645730   20778 start.go:485] detecting cgroup driver to use...
	I0223 14:22:29.645748   20778 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:22:29.645806   20778 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 14:22:29.655468   20778 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 14:22:29.655662   20778 command_runner.go:130] > [Unit]
	I0223 14:22:29.655677   20778 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 14:22:29.655685   20778 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 14:22:29.655689   20778 command_runner.go:130] > BindsTo=containerd.service
	I0223 14:22:29.655695   20778 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 14:22:29.655700   20778 command_runner.go:130] > Wants=network-online.target
	I0223 14:22:29.655706   20778 command_runner.go:130] > Requires=docker.socket
	I0223 14:22:29.655710   20778 command_runner.go:130] > StartLimitBurst=3
	I0223 14:22:29.655714   20778 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 14:22:29.655717   20778 command_runner.go:130] > [Service]
	I0223 14:22:29.655720   20778 command_runner.go:130] > Type=notify
	I0223 14:22:29.655724   20778 command_runner.go:130] > Restart=on-failure
	I0223 14:22:29.655730   20778 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 14:22:29.655739   20778 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 14:22:29.655744   20778 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 14:22:29.655749   20778 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 14:22:29.655756   20778 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 14:22:29.655763   20778 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 14:22:29.655769   20778 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 14:22:29.655779   20778 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 14:22:29.655787   20778 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 14:22:29.655793   20778 command_runner.go:130] > ExecStart=
	I0223 14:22:29.655810   20778 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 14:22:29.655815   20778 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 14:22:29.655820   20778 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 14:22:29.655825   20778 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 14:22:29.655828   20778 command_runner.go:130] > LimitNOFILE=infinity
	I0223 14:22:29.655832   20778 command_runner.go:130] > LimitNPROC=infinity
	I0223 14:22:29.655835   20778 command_runner.go:130] > LimitCORE=infinity
	I0223 14:22:29.655839   20778 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 14:22:29.655844   20778 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 14:22:29.655848   20778 command_runner.go:130] > TasksMax=infinity
	I0223 14:22:29.655851   20778 command_runner.go:130] > TimeoutStartSec=0
	I0223 14:22:29.655856   20778 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 14:22:29.655860   20778 command_runner.go:130] > Delegate=yes
	I0223 14:22:29.655866   20778 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 14:22:29.655869   20778 command_runner.go:130] > KillMode=process
	I0223 14:22:29.655876   20778 command_runner.go:130] > [Install]
	I0223 14:22:29.655881   20778 command_runner.go:130] > WantedBy=multi-user.target
	I0223 14:22:29.656318   20778 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 14:22:29.656376   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 14:22:29.666346   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:22:29.679412   20778 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 14:22:29.679436   20778 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
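
With /etc/crictl.yaml now pointing at cri-dockerd rather than containerd, crictl talks to the Docker-backed CRI socket; a sketch of a manual check (endpoint taken from the file written above):
  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version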
	I0223 14:22:29.680190   20778 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 14:22:29.785998   20778 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 14:22:29.846079   20778 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 14:22:29.846099   20778 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
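
The 144-byte daemon.json pushed above is not printed in the log; a hedged sketch of the usual shape for forcing the cgroupfs driver (the contents here are an assumption, not the literal payload):
  sudo tee /etc/docker/daemon.json <<'EOF'
  {
    "exec-opts": ["native.cgroupdriver=cgroupfs"]
  }
  EOF
  sudo systemctl daemon-reload && sudo systemctl restart docker
  docker info --format '{{.CgroupDriver}}'    # should print: cgroupfs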
	I0223 14:22:29.875291   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:22:29.936773   20778 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 14:22:30.178429   20778 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 14:22:30.245350   20778 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 14:22:30.245421   20778 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 14:22:30.311955   20778 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 14:22:30.376482   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:22:30.442762   20778 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 14:22:30.453624   20778 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 14:22:30.453706   20778 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 14:22:30.457497   20778 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 14:22:30.457507   20778 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 14:22:30.457512   20778 command_runner.go:130] > Device: aeh/174d	Inode: 206         Links: 1
	I0223 14:22:30.457518   20778 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 14:22:30.457525   20778 command_runner.go:130] > Access: 2023-02-23 22:22:30.450036934 +0000
	I0223 14:22:30.457529   20778 command_runner.go:130] > Modify: 2023-02-23 22:22:30.450036934 +0000
	I0223 14:22:30.457534   20778 command_runner.go:130] > Change: 2023-02-23 22:22:30.451036933 +0000
	I0223 14:22:30.457543   20778 command_runner.go:130] >  Birth: -
	I0223 14:22:30.457558   20778 start.go:553] Will wait 60s for crictl version
	I0223 14:22:30.457593   20778 ssh_runner.go:195] Run: which crictl
	I0223 14:22:30.461212   20778 command_runner.go:130] > /usr/bin/crictl
	I0223 14:22:30.461380   20778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 14:22:30.552540   20778 command_runner.go:130] > Version:  0.1.0
	I0223 14:22:30.552553   20778 command_runner.go:130] > RuntimeName:  docker
	I0223 14:22:30.552557   20778 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 14:22:30.552564   20778 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 14:22:30.554412   20778 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 14:22:30.554492   20778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:22:30.578365   20778 command_runner.go:130] > 23.0.1
	I0223 14:22:30.579998   20778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:22:30.602894   20778 command_runner.go:130] > 23.0.1
	I0223 14:22:30.651359   20778 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 14:22:30.651602   20778 cli_runner.go:164] Run: docker exec -t multinode-359000 dig +short host.docker.internal
	I0223 14:22:30.770066   20778 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 14:22:30.770179   20778 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 14:22:30.774553   20778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:22:30.784573   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:30.841215   20778 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 14:22:30.841297   20778 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:22:30.861049   20778 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 14:22:30.861071   20778 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 14:22:30.861075   20778 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 14:22:30.861081   20778 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 14:22:30.861086   20778 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 14:22:30.861091   20778 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 14:22:30.861095   20778 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 14:22:30.861102   20778 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 14:22:30.861138   20778 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 14:22:30.861150   20778 docker.go:560] Images already preloaded, skipping extraction
	I0223 14:22:30.861250   20778 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:22:30.880076   20778 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 14:22:30.880088   20778 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 14:22:30.880093   20778 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 14:22:30.880098   20778 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 14:22:30.880104   20778 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 14:22:30.880109   20778 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 14:22:30.880114   20778 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 14:22:30.880121   20778 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 14:22:30.881672   20778 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 14:22:30.881684   20778 cache_images.go:84] Images are preloaded, skipping loading
	I0223 14:22:30.881780   20778 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 14:22:30.905465   20778 command_runner.go:130] > cgroupfs
	I0223 14:22:30.907014   20778 cni.go:84] Creating CNI manager for ""
	I0223 14:22:30.907027   20778 cni.go:136] 1 nodes found, recommending kindnet
	I0223 14:22:30.907042   20778 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 14:22:30.907057   20778 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-359000 NodeName:multinode-359000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 14:22:30.907180   20778 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-359000"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
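
The config above is written to /var/tmp/minikube/kubeadm.yaml further down; a hedged sketch of a standalone sanity check, since a dry-run renders the manifests without touching the node:
  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run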
	
	I0223 14:22:30.907244   20778 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-359000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 14:22:30.907308   20778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 14:22:30.914358   20778 command_runner.go:130] > kubeadm
	I0223 14:22:30.914366   20778 command_runner.go:130] > kubectl
	I0223 14:22:30.914370   20778 command_runner.go:130] > kubelet
	I0223 14:22:30.914948   20778 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 14:22:30.915008   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 14:22:30.922349   20778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0223 14:22:30.934893   20778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 14:22:30.947559   20778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0223 14:22:30.960493   20778 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 14:22:30.964355   20778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:22:30.974018   20778 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000 for IP: 192.168.58.2
	I0223 14:22:30.974034   20778 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:30.974214   20778 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
	I0223 14:22:30.974298   20778 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
	I0223 14:22:30.974352   20778 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key
	I0223 14:22:30.974367   20778 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt with IP's: []
	I0223 14:22:31.058307   20778 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt ...
	I0223 14:22:31.058316   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt: {Name:mka52b9e77c478dfe5439016c20d5225efaad9cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:31.058594   20778 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key ...
	I0223 14:22:31.058601   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key: {Name:mkf0e7dd49748712552fa7819d7d2db125545e50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:31.058782   20778 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key.cee25041
	I0223 14:22:31.058797   20778 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 14:22:31.127584   20778 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt.cee25041 ...
	I0223 14:22:31.127591   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt.cee25041: {Name:mk5f961080e03220b9f67a4e8170b55a83081e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:31.127923   20778 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key.cee25041 ...
	I0223 14:22:31.127934   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key.cee25041: {Name:mkce59497be2b7607371982625aeaaad62aa9126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:31.128139   20778 certs.go:333] copying /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt
	I0223 14:22:31.128295   20778 certs.go:337] copying /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key
	I0223 14:22:31.128459   20778 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.key
	I0223 14:22:31.128476   20778 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.crt with IP's: []
	I0223 14:22:31.244647   20778 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.crt ...
	I0223 14:22:31.244657   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.crt: {Name:mk32899bae51507ea9dcc625c110d92663d55316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:31.244911   20778 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.key ...
	I0223 14:22:31.244919   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.key: {Name:mk5cdc2c98d324e290734ba0dd697285f9a4e252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:31.245116   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 14:22:31.245148   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 14:22:31.245170   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 14:22:31.245194   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 14:22:31.245215   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 14:22:31.245236   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 14:22:31.245255   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 14:22:31.245276   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 14:22:31.245374   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
	W0223 14:22:31.245426   20778 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
	I0223 14:22:31.245439   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 14:22:31.245477   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
	I0223 14:22:31.245511   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
	I0223 14:22:31.245542   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
	I0223 14:22:31.245618   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:22:31.245650   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem -> /usr/share/ca-certificates/15210.pem
	I0223 14:22:31.245680   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> /usr/share/ca-certificates/152102.pem
	I0223 14:22:31.245703   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:22:31.246268   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 14:22:31.264226   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 14:22:31.281261   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 14:22:31.298018   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 14:22:31.315069   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 14:22:31.332043   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 14:22:31.348982   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 14:22:31.366571   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 14:22:31.383857   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
	I0223 14:22:31.400819   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
	I0223 14:22:31.417731   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
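
A hedged sketch for double-checking the SANs baked into the apiserver certificate that was just copied over (IP list taken from the crypto.go line above; any DNS SANs shown alongside them would come from the kubeadm certSANs):
  openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
  # expect: IP Address:192.168.58.2, IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1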
	I0223 14:22:31.434612   20778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 14:22:31.447186   20778 ssh_runner.go:195] Run: openssl version
	I0223 14:22:31.452279   20778 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 14:22:31.452666   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 14:22:31.460822   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:22:31.464726   20778 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:22:31.464859   20778 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:22:31.464901   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:22:31.469974   20778 command_runner.go:130] > b5213941
	I0223 14:22:31.470306   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 14:22:31.478394   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
	I0223 14:22:31.486497   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
	I0223 14:22:31.490349   20778 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:22:31.490446   20778 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:22:31.490494   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
	I0223 14:22:31.495755   20778 command_runner.go:130] > 51391683
	I0223 14:22:31.495989   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
	I0223 14:22:31.504022   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
	I0223 14:22:31.511980   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
	I0223 14:22:31.515839   20778 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:22:31.515984   20778 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:22:31.516030   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
	I0223 14:22:31.521095   20778 command_runner.go:130] > 3ec20f2e
	I0223 14:22:31.521439   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
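
The hash-then-symlink pattern above (openssl x509 -hash followed by ln -fs into /etc/ssl/certs) is the standard OpenSSL CA lookup layout; a sketch generalizing it for one file:
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"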
	I0223 14:22:31.529353   20778 kubeadm.go:401] StartCluster: {Name:multinode-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:22:31.529455   20778 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 14:22:31.548402   20778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 14:22:31.556334   20778 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0223 14:22:31.556346   20778 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0223 14:22:31.556351   20778 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0223 14:22:31.556412   20778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 14:22:31.563836   20778 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 14:22:31.563891   20778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 14:22:31.571109   20778 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0223 14:22:31.571121   20778 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0223 14:22:31.571127   20778 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0223 14:22:31.571150   20778 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 14:22:31.571175   20778 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 14:22:31.571195   20778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 14:22:31.622387   20778 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0223 14:22:31.622401   20778 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0223 14:22:31.622432   20778 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 14:22:31.622437   20778 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 14:22:31.726583   20778 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 14:22:31.726595   20778 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 14:22:31.726669   20778 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 14:22:31.726680   20778 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 14:22:31.726763   20778 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 14:22:31.726770   20778 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 14:22:31.853732   20778 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 14:22:31.853745   20778 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 14:22:31.875623   20778 out.go:204]   - Generating certificates and keys ...
	I0223 14:22:31.875691   20778 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0223 14:22:31.875718   20778 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 14:22:31.875784   20778 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0223 14:22:31.875796   20778 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 14:22:31.918241   20778 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 14:22:31.918250   20778 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 14:22:32.160845   20778 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 14:22:32.160859   20778 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0223 14:22:32.470893   20778 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 14:22:32.470918   20778 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0223 14:22:32.540261   20778 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 14:22:32.540269   20778 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0223 14:22:32.773035   20778 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 14:22:32.773050   20778 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0223 14:22:32.773252   20778 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-359000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 14:22:32.773263   20778 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-359000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 14:22:32.999464   20778 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 14:22:32.999473   20778 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0223 14:22:33.020525   20778 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-359000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 14:22:33.020536   20778 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-359000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 14:22:33.110920   20778 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 14:22:33.110926   20778 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 14:22:33.222781   20778 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 14:22:33.222791   20778 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 14:22:33.369263   20778 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 14:22:33.369275   20778 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0223 14:22:33.369317   20778 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 14:22:33.369328   20778 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 14:22:33.503536   20778 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 14:22:33.503551   20778 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 14:22:33.596312   20778 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 14:22:33.596328   20778 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 14:22:33.813896   20778 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 14:22:33.813908   20778 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 14:22:33.967258   20778 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 14:22:33.967271   20778 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 14:22:33.977422   20778 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 14:22:33.977439   20778 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 14:22:33.978095   20778 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 14:22:33.978101   20778 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 14:22:33.978133   20778 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 14:22:33.978139   20778 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 14:22:34.049610   20778 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 14:22:34.049621   20778 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 14:22:34.071313   20778 out.go:204]   - Booting up control plane ...
	I0223 14:22:34.071390   20778 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 14:22:34.071399   20778 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 14:22:34.071473   20778 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 14:22:34.071479   20778 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 14:22:34.071536   20778 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 14:22:34.071548   20778 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 14:22:34.071627   20778 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 14:22:34.071634   20778 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 14:22:34.071761   20778 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 14:22:34.071768   20778 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 14:22:42.058654   20778 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002264 seconds
	I0223 14:22:42.058678   20778 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.002264 seconds
	I0223 14:22:42.058823   20778 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 14:22:42.058830   20778 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 14:22:42.066794   20778 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 14:22:42.066812   20778 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 14:22:42.583785   20778 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0223 14:22:42.583795   20778 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0223 14:22:42.583942   20778 kubeadm.go:322] [mark-control-plane] Marking the node multinode-359000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 14:22:42.583949   20778 command_runner.go:130] > [mark-control-plane] Marking the node multinode-359000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 14:22:43.092233   20778 kubeadm.go:322] [bootstrap-token] Using token: a3m378.esw3wxqjqraswiei
	I0223 14:22:43.092252   20778 command_runner.go:130] > [bootstrap-token] Using token: a3m378.esw3wxqjqraswiei
	I0223 14:22:43.129545   20778 out.go:204]   - Configuring RBAC rules ...
	I0223 14:22:43.129705   20778 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 14:22:43.129719   20778 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 14:22:43.131996   20778 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 14:22:43.132005   20778 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 14:22:43.136844   20778 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 14:22:43.136858   20778 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 14:22:43.138991   20778 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 14:22:43.139005   20778 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 14:22:43.141911   20778 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 14:22:43.141928   20778 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 14:22:43.144405   20778 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 14:22:43.144417   20778 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 14:22:43.152145   20778 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 14:22:43.152161   20778 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 14:22:43.283855   20778 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0223 14:22:43.283869   20778 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0223 14:22:43.570405   20778 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0223 14:22:43.570428   20778 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0223 14:22:43.570787   20778 kubeadm.go:322] 
	I0223 14:22:43.570836   20778 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0223 14:22:43.570845   20778 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0223 14:22:43.570854   20778 kubeadm.go:322] 
	I0223 14:22:43.570923   20778 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0223 14:22:43.570932   20778 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0223 14:22:43.570941   20778 kubeadm.go:322] 
	I0223 14:22:43.570964   20778 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0223 14:22:43.570973   20778 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0223 14:22:43.571016   20778 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 14:22:43.571022   20778 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 14:22:43.571068   20778 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 14:22:43.571077   20778 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 14:22:43.571082   20778 kubeadm.go:322] 
	I0223 14:22:43.571126   20778 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0223 14:22:43.571134   20778 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0223 14:22:43.571143   20778 kubeadm.go:322] 
	I0223 14:22:43.571191   20778 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 14:22:43.571197   20778 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 14:22:43.571201   20778 kubeadm.go:322] 
	I0223 14:22:43.571253   20778 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0223 14:22:43.571261   20778 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0223 14:22:43.571328   20778 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 14:22:43.571335   20778 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 14:22:43.571395   20778 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 14:22:43.571399   20778 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 14:22:43.571408   20778 kubeadm.go:322] 
	I0223 14:22:43.571484   20778 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0223 14:22:43.571490   20778 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0223 14:22:43.571552   20778 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0223 14:22:43.571558   20778 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0223 14:22:43.571567   20778 kubeadm.go:322] 
	I0223 14:22:43.571634   20778 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token a3m378.esw3wxqjqraswiei \
	I0223 14:22:43.571638   20778 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token a3m378.esw3wxqjqraswiei \
	I0223 14:22:43.571719   20778 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 \
	I0223 14:22:43.571721   20778 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 \
	I0223 14:22:43.571742   20778 command_runner.go:130] > 	--control-plane 
	I0223 14:22:43.571747   20778 kubeadm.go:322] 	--control-plane 
	I0223 14:22:43.571755   20778 kubeadm.go:322] 
	I0223 14:22:43.571823   20778 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0223 14:22:43.571824   20778 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0223 14:22:43.571833   20778 kubeadm.go:322] 
	I0223 14:22:43.571909   20778 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token a3m378.esw3wxqjqraswiei \
	I0223 14:22:43.571918   20778 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a3m378.esw3wxqjqraswiei \
	I0223 14:22:43.572005   20778 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 
	I0223 14:22:43.572012   20778 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 
	I0223 14:22:43.575110   20778 kubeadm.go:322] W0223 22:22:31.615362    1296 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 14:22:43.575115   20778 command_runner.go:130] ! W0223 22:22:31.615362    1296 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 14:22:43.575244   20778 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 14:22:43.575257   20778 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 14:22:43.575362   20778 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 14:22:43.575371   20778 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
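
Note on the join commands printed above: the --discovery-token-ca-cert-hash value is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo), so it can be recomputed from the CA certificate alone. A minimal Go sketch of that computation, assuming the CA lives at /var/lib/minikube/certs/ca.crt (an assumed path, not taken from this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Assumed location of the cluster CA certificate inside the minikube node.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("ca.crt does not contain a PEM block")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm's hash is sha256 over the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
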
	I0223 14:22:43.575384   20778 cni.go:84] Creating CNI manager for ""
	I0223 14:22:43.575393   20778 cni.go:136] 1 nodes found, recommending kindnet
	I0223 14:22:43.636248   20778 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0223 14:22:43.658313   20778 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 14:22:43.664670   20778 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 14:22:43.664687   20778 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 14:22:43.664695   20778 command_runner.go:130] > Device: a6h/166d	Inode: 267127      Links: 1
	I0223 14:22:43.664703   20778 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 14:22:43.664715   20778 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 14:22:43.664723   20778 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 14:22:43.664729   20778 command_runner.go:130] > Change: 2023-02-23 21:59:23.284856714 +0000
	I0223 14:22:43.664734   20778 command_runner.go:130] >  Birth: -
	I0223 14:22:43.664784   20778 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 14:22:43.664795   20778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 14:22:43.678783   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 14:22:44.187994   20778 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0223 14:22:44.192238   20778 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0223 14:22:44.198612   20778 command_runner.go:130] > serviceaccount/kindnet created
	I0223 14:22:44.205598   20778 command_runner.go:130] > daemonset.apps/kindnet created
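
The CNI step above writes the kindnet manifest to /var/tmp/minikube/cni.yaml on the node and applies it with the bundled kubectl. A rough stand-alone Go sketch of the same write-then-apply pattern (placeholder manifest content and a local kubectl, not minikube's actual ssh_runner plumbing):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Placeholder for the kindnet manifest (DaemonSet, ServiceAccount, RBAC).
	manifest := []byte("# CNI manifest would go here\n")

	tmp, err := os.CreateTemp("", "cni-*.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.Write(manifest); err != nil {
		log.Fatal(err)
	}
	if err := tmp.Close(); err != nil {
		log.Fatal(err)
	}

	// Same shape as the logged command:
	//   kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	out, err := exec.Command("kubectl", "apply", "-f", tmp.Name()).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
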
	I0223 14:22:44.211476   20778 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 14:22:44.211563   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:44.211565   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0 minikube.k8s.io/name=multinode-359000 minikube.k8s.io/updated_at=2023_02_23T14_22_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:44.219349   20778 command_runner.go:130] > -16
	I0223 14:22:44.219386   20778 ops.go:34] apiserver oom_adj: -16
	I0223 14:22:44.308996   20778 command_runner.go:130] > node/multinode-359000 labeled
	I0223 14:22:44.309042   20778 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0223 14:22:44.309151   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:44.388721   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:44.888914   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:44.951937   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:45.390933   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:45.450655   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:45.891037   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:45.956320   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:46.389803   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:46.452876   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:46.890085   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:46.954791   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:47.389812   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:47.453029   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:47.891091   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:47.951811   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:48.389152   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:48.453297   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:48.891103   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:48.956666   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:49.390010   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:49.454455   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:49.890013   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:49.954025   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:50.390149   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:50.454538   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:50.889982   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:50.950898   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:51.389860   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:51.450032   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:51.890910   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:51.955754   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:52.389061   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:52.481858   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:52.889693   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:52.950050   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:53.389031   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:53.452988   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:53.890471   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:53.952957   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:54.390475   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:54.452099   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:54.889587   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:55.008076   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:55.389115   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:55.455382   20778 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 14:22:55.889025   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 14:22:55.947614   20778 command_runner.go:130] > NAME      SECRETS   AGE
	I0223 14:22:55.947633   20778 command_runner.go:130] > default   0         0s
	I0223 14:22:55.950686   20778 kubeadm.go:1073] duration metric: took 11.739123889s to wait for elevateKubeSystemPrivileges.
	I0223 14:22:55.950705   20778 kubeadm.go:403] StartCluster complete in 24.421221883s
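
The retry loop above (roughly one attempt every 500ms) is minikube shelling out to kubectl until the "default" ServiceAccount appears, which is what gates elevateKubeSystemPrivileges. An equivalent client-go sketch, with the kubeconfig path assumed and a two-minute cap chosen only for illustration:

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(2 * time.Minute)
	for {
		// Mirrors: kubectl get sa default
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			log.Println("default ServiceAccount is ready")
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("timed out waiting for default ServiceAccount: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
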
	I0223 14:22:55.950722   20778 settings.go:142] acquiring lock: {Name:mk5254606ab776d081c4c857df8d4e00b86fee57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:55.950813   20778 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:22:55.951298   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/kubeconfig: {Name:mk366c13f6069774a57c4d74123d5172c8522a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:22:55.951575   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 14:22:55.951593   20778 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0223 14:22:55.951656   20778 addons.go:65] Setting storage-provisioner=true in profile "multinode-359000"
	I0223 14:22:55.951677   20778 addons.go:227] Setting addon storage-provisioner=true in "multinode-359000"
	I0223 14:22:55.951681   20778 addons.go:65] Setting default-storageclass=true in profile "multinode-359000"
	I0223 14:22:55.951709   20778 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-359000"
	I0223 14:22:55.951721   20778 host.go:66] Checking if "multinode-359000" exists ...
	I0223 14:22:55.951729   20778 config.go:182] Loaded profile config "multinode-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:22:55.951797   20778 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:22:55.951982   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:22:55.952053   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:22:55.952056   20778 kapi.go:59] client config for multinode-359000: &rest.Config{Host:"https://127.0.0.1:58734", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:22:55.956474   20778 cert_rotation.go:137] Starting client certificate rotation controller
	I0223 14:22:55.956759   20778 round_trippers.go:463] GET https://127.0.0.1:58734/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 14:22:55.956769   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:55.956777   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:55.956782   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:55.965593   20778 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0223 14:22:55.965610   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:55.965616   20778 round_trippers.go:580]     Audit-Id: 9755b856-a8b2-4aa2-922a-a5a3c26ffa99
	I0223 14:22:55.965621   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:55.965626   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:55.965630   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:55.965635   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:55.965640   20778 round_trippers.go:580]     Content-Length: 291
	I0223 14:22:55.965644   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:55 GMT
	I0223 14:22:55.965667   20778 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08757e71-1b54-44ae-9839-af03f5e9d0c0","resourceVersion":"324","creationTimestamp":"2023-02-23T22:22:43Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0223 14:22:55.966004   20778 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08757e71-1b54-44ae-9839-af03f5e9d0c0","resourceVersion":"324","creationTimestamp":"2023-02-23T22:22:43Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0223 14:22:55.966030   20778 round_trippers.go:463] PUT https://127.0.0.1:58734/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 14:22:55.966035   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:55.966041   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:55.966050   20778 round_trippers.go:473]     Content-Type: application/json
	I0223 14:22:55.966077   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:55.971574   20778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 14:22:55.971605   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:55.971615   20778 round_trippers.go:580]     Audit-Id: bb70b539-cee3-4d6c-bfb5-0bc20b00b073
	I0223 14:22:55.971623   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:55.971631   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:55.971639   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:55.971646   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:55.971655   20778 round_trippers.go:580]     Content-Length: 291
	I0223 14:22:55.971678   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:55 GMT
	I0223 14:22:55.971709   20778 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08757e71-1b54-44ae-9839-af03f5e9d0c0","resourceVersion":"337","creationTimestamp":"2023-02-23T22:22:43Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
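
The GET/PUT pair above talks to the Deployment's scale subresource to drop coredns from the default 2 replicas to 1 for a single-node profile. A hedged client-go sketch of the same round trip (kubeconfig path assumed, error handling kept minimal):

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deployments := cs.AppsV1().Deployments("kube-system")

	// GET .../deployments/coredns/scale
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// PUT the same Scale object back with spec.replicas lowered to 1,
	// matching the request body logged above.
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("coredns rescaled to 1 replica")
}
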
	I0223 14:22:56.022076   20778 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:22:56.044690   20778 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 14:22:56.044967   20778 kapi.go:59] client config for multinode-359000: &rest.Config{Host:"https://127.0.0.1:58734", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:22:56.065956   20778 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 14:22:56.065973   20778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 14:22:56.066086   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:56.067183   20778 round_trippers.go:463] GET https://127.0.0.1:58734/apis/storage.k8s.io/v1/storageclasses
	I0223 14:22:56.067248   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:56.067271   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:56.067285   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:56.070562   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:22:56.070594   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:56.070605   20778 round_trippers.go:580]     Audit-Id: 703452e3-e644-4b28-a2e0-31732cff6011
	I0223 14:22:56.070616   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:56.070627   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:56.070636   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:56.070669   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:56.070676   20778 round_trippers.go:580]     Content-Length: 109
	I0223 14:22:56.070681   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:56 GMT
	I0223 14:22:56.070710   20778 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"350"},"items":[]}
	I0223 14:22:56.071158   20778 addons.go:227] Setting addon default-storageclass=true in "multinode-359000"
	I0223 14:22:56.071185   20778 host.go:66] Checking if "multinode-359000" exists ...
	I0223 14:22:56.071743   20778 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:22:56.096240   20778 command_runner.go:130] > apiVersion: v1
	I0223 14:22:56.096268   20778 command_runner.go:130] > data:
	I0223 14:22:56.096275   20778 command_runner.go:130] >   Corefile: |
	I0223 14:22:56.096284   20778 command_runner.go:130] >     .:53 {
	I0223 14:22:56.096291   20778 command_runner.go:130] >         errors
	I0223 14:22:56.096301   20778 command_runner.go:130] >         health {
	I0223 14:22:56.096310   20778 command_runner.go:130] >            lameduck 5s
	I0223 14:22:56.096321   20778 command_runner.go:130] >         }
	I0223 14:22:56.096332   20778 command_runner.go:130] >         ready
	I0223 14:22:56.096347   20778 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0223 14:22:56.096356   20778 command_runner.go:130] >            pods insecure
	I0223 14:22:56.096365   20778 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0223 14:22:56.096377   20778 command_runner.go:130] >            ttl 30
	I0223 14:22:56.096387   20778 command_runner.go:130] >         }
	I0223 14:22:56.096398   20778 command_runner.go:130] >         prometheus :9153
	I0223 14:22:56.096408   20778 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0223 14:22:56.096424   20778 command_runner.go:130] >            max_concurrent 1000
	I0223 14:22:56.096434   20778 command_runner.go:130] >         }
	I0223 14:22:56.096440   20778 command_runner.go:130] >         cache 30
	I0223 14:22:56.096448   20778 command_runner.go:130] >         loop
	I0223 14:22:56.096456   20778 command_runner.go:130] >         reload
	I0223 14:22:56.096475   20778 command_runner.go:130] >         loadbalance
	I0223 14:22:56.096487   20778 command_runner.go:130] >     }
	I0223 14:22:56.096495   20778 command_runner.go:130] > kind: ConfigMap
	I0223 14:22:56.096505   20778 command_runner.go:130] > metadata:
	I0223 14:22:56.096513   20778 command_runner.go:130] >   creationTimestamp: "2023-02-23T22:22:43Z"
	I0223 14:22:56.096517   20778 command_runner.go:130] >   name: coredns
	I0223 14:22:56.096520   20778 command_runner.go:130] >   namespace: kube-system
	I0223 14:22:56.096524   20778 command_runner.go:130] >   resourceVersion: "227"
	I0223 14:22:56.096529   20778 command_runner.go:130] >   uid: 0dcdd836-fb8b-4019-a423-111674db63b0
	I0223 14:22:56.096677   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0223 14:22:56.136049   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:56.140946   20778 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 14:22:56.140957   20778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 14:22:56.141022   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:56.204480   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:22:56.384454   20778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 14:22:56.471941   20778 round_trippers.go:463] GET https://127.0.0.1:58734/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 14:22:56.471956   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:56.471963   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:56.471968   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:56.474643   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:56.474659   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:56.474665   20778 round_trippers.go:580]     Audit-Id: 6f6367ee-0c5d-4f5d-82ba-3c28cdde7d4b
	I0223 14:22:56.474670   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:56.474674   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:56.474681   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:56.474685   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:56.474690   20778 round_trippers.go:580]     Content-Length: 291
	I0223 14:22:56.474695   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:56 GMT
	I0223 14:22:56.474709   20778 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08757e71-1b54-44ae-9839-af03f5e9d0c0","resourceVersion":"357","creationTimestamp":"2023-02-23T22:22:43Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 14:22:56.474762   20778 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-359000" context rescaled to 1 replicas
	I0223 14:22:56.474784   20778 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 14:22:56.497137   20778 out.go:177] * Verifying Kubernetes components...
	I0223 14:22:56.491673   20778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 14:22:56.518936   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:22:56.565793   20778 command_runner.go:130] > configmap/coredns replaced
	I0223 14:22:56.573373   20778 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
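
The replace above is built with a sed pipeline that splices a hosts block for host.minikube.internal in front of CoreDNS's forward directive. A purely illustrative Go version of that string edit, run against a trimmed stand-in Corefile rather than the live ConfigMap:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts stanza ahead of the "forward . /etc/resolv.conf" line.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	// Trimmed stand-in for the Corefile dumped earlier in this log.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n    }\n"
	fmt.Print(injectHostRecord(corefile, "192.168.65.2"))
}
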
	I0223 14:22:56.804214   20778 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0223 14:22:56.869433   20778 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0223 14:22:56.879395   20778 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 14:22:56.886010   20778 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 14:22:56.894549   20778 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0223 14:22:56.904426   20778 command_runner.go:130] > pod/storage-provisioner created
	I0223 14:22:56.993382   20778 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0223 14:22:57.000838   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:22:57.064092   20778 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0223 14:22:57.088118   20778 addons.go:492] enable addons completed in 1.136446823s: enabled=[storage-provisioner default-storageclass]
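
Before turning on default-storageclass, the flow above listed StorageClasses and got an empty items array back, which is why the standard class was then applied. A small client-go sketch of that pre-check, kubeconfig path assumed:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors: GET /apis/storage.k8s.io/v1/storageclasses
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if len(scs.Items) == 0 {
		log.Println("no StorageClass found; a default one would be applied next")
		return
	}
	for _, sc := range scs.Items {
		log.Printf("found StorageClass %s", sc.Name)
	}
}
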
	I0223 14:22:57.098379   20778 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:22:57.098617   20778 kapi.go:59] client config for multinode-359000: &rest.Config{Host:"https://127.0.0.1:58734", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:22:57.098921   20778 node_ready.go:35] waiting up to 6m0s for node "multinode-359000" to be "Ready" ...
	I0223 14:22:57.098971   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:57.098976   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:57.098984   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:57.098989   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:57.101688   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:57.101704   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:57.101710   20778 round_trippers.go:580]     Audit-Id: b1325e75-bfa9-4729-8bfe-0d3efdc69ea4
	I0223 14:22:57.101717   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:57.101724   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:57.101732   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:57.101740   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:57.101744   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:57 GMT
	I0223 14:22:57.101839   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:22:57.102385   20778 node_ready.go:49] node "multinode-359000" has status "Ready":"True"
	I0223 14:22:57.102397   20778 node_ready.go:38] duration metric: took 3.45935ms waiting for node "multinode-359000" to be "Ready" ...
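
The node_ready wait above fetches the node object once and reads its Ready condition straight from status. A minimal client-go sketch of the same check; the node name matches this run, but the kubeconfig path is an assumption:

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-359000", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			log.Printf("node %s Ready=%s", node.Name, cond.Status)
			return
		}
	}
	log.Printf("node %s has no Ready condition yet", node.Name)
}
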
	I0223 14:22:57.102408   20778 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 14:22:57.102462   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods
	I0223 14:22:57.102467   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:57.102474   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:57.102479   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:57.105284   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:57.105306   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:57.105318   20778 round_trippers.go:580]     Audit-Id: 76b96b0c-0813-4180-8a2c-e009bc0f8902
	I0223 14:22:57.105330   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:57.105336   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:57.105342   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:57.105349   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:57.105357   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:57 GMT
	I0223 14:22:57.107064   20778 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"373"},"items":[{"metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"366","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 60224 chars]
	I0223 14:22:57.109910   20778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace to be "Ready" ...
	I0223 14:22:57.109970   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:22:57.109980   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:57.109991   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:57.110006   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:57.112565   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:57.112579   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:57.112585   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:57.112590   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:57 GMT
	I0223 14:22:57.112595   20778 round_trippers.go:580]     Audit-Id: 72de2877-cc47-467a-aa0e-f88257433df4
	I0223 14:22:57.112600   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:57.112605   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:57.112613   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:57.112873   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"366","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 14:22:57.113185   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:57.113194   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:57.113203   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:57.113209   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:57.115459   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:57.115473   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:57.115482   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:57.115487   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:57.115492   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:57.115497   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:57 GMT
	I0223 14:22:57.115501   20778 round_trippers.go:580]     Audit-Id: a98c7b94-e7b7-4e44-9172-e4152cd5312a
	I0223 14:22:57.115508   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:57.115919   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:22:57.617462   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:22:57.617481   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:57.617489   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:57.617495   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:57.620270   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:57.620287   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:57.620296   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:57.620314   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:57.620326   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:57.620337   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:57 GMT
	I0223 14:22:57.620346   20778 round_trippers.go:580]     Audit-Id: 12df3f9f-e02b-486b-b807-d426af0f6a4f
	I0223 14:22:57.620354   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:57.620436   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"366","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 14:22:57.620739   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:57.620746   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:57.620754   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:57.620764   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:57.623093   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:57.623125   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:57.623139   20778 round_trippers.go:580]     Audit-Id: bb6b1185-383c-447d-b850-3ef227053c52
	I0223 14:22:57.623145   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:57.623150   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:57.623155   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:57.623160   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:57.623165   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:57 GMT
	I0223 14:22:57.623234   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:22:58.116605   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:22:58.116619   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:58.116628   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:58.116636   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:58.119319   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:58.119336   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:58.119345   20778 round_trippers.go:580]     Audit-Id: d3e03daa-7868-4d12-8bf5-1a3c43154faa
	I0223 14:22:58.119357   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:58.119371   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:58.119383   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:58.119398   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:58.119408   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:58 GMT
	I0223 14:22:58.119558   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:22:58.119999   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:58.120007   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:58.120013   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:58.120019   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:58.123459   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:22:58.123474   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:58.123481   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:58.123488   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:58.123496   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:58.123503   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:58.123510   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:58 GMT
	I0223 14:22:58.123532   20778 round_trippers.go:580]     Audit-Id: 37f4c48e-dc5c-4ceb-afa8-536d12234f91
	I0223 14:22:58.123633   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:22:58.616567   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:22:58.616580   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:58.616586   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:58.616592   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:58.619786   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:22:58.619798   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:58.619804   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:58.619809   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:58.619817   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:58.619823   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:58.619829   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:58 GMT
	I0223 14:22:58.619834   20778 round_trippers.go:580]     Audit-Id: 46b232fc-9c34-4662-bd80-04513f011a74
	I0223 14:22:58.619897   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:22:58.620187   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:58.620193   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:58.620199   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:58.620220   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:58.622357   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:58.622368   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:58.622375   20778 round_trippers.go:580]     Audit-Id: d40a1ea7-d5ee-4eae-8d42-f15c3e5abe59
	I0223 14:22:58.622380   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:58.622386   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:58.622391   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:58.622396   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:58.622402   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:58 GMT
	I0223 14:22:58.622466   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:22:59.117558   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:22:59.117583   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:59.117609   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:59.117616   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:59.166085   20778 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I0223 14:22:59.166116   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:59.166134   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:59.166149   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:59.166162   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:59 GMT
	I0223 14:22:59.166172   20778 round_trippers.go:580]     Audit-Id: d512c492-116f-4469-8d51-d958daabbc48
	I0223 14:22:59.166187   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:59.166223   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:59.166335   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:22:59.166767   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:59.166780   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:59.166792   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:59.166804   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:59.169614   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:22:59.169627   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:59.169632   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:59.169637   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:59 GMT
	I0223 14:22:59.169642   20778 round_trippers.go:580]     Audit-Id: dfc4ad9b-e930-4cb5-81c7-39fff481e2c0
	I0223 14:22:59.169647   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:59.169653   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:59.169670   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:59.169780   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:22:59.169977   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
	I0223 14:22:59.617610   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:22:59.617636   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:59.617648   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:59.617657   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:59.622140   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:22:59.622154   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:59.622160   20778 round_trippers.go:580]     Audit-Id: 876e3638-bdb3-49ea-9b85-e6a8396cafb1
	I0223 14:22:59.622165   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:59.622173   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:59.622179   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:59.622184   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:59.622188   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:59 GMT
	I0223 14:22:59.622353   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:22:59.622656   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:22:59.622664   20778 round_trippers.go:469] Request Headers:
	I0223 14:22:59.622670   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:22:59.622676   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:22:59.626069   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:22:59.626083   20778 round_trippers.go:577] Response Headers:
	I0223 14:22:59.626089   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:22:59.626094   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:22:59.626099   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:22:59.626103   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:22:59 GMT
	I0223 14:22:59.626109   20778 round_trippers.go:580]     Audit-Id: b8645429-be5b-4965-aebc-0d74fa956510
	I0223 14:22:59.626116   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:22:59.626173   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:00.117460   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:00.117481   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:00.117493   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:00.117504   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:00.121949   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:00.121961   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:00.121966   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:00.121972   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:00 GMT
	I0223 14:23:00.121977   20778 round_trippers.go:580]     Audit-Id: f9de4328-090b-4692-840f-d31425d93d2f
	I0223 14:23:00.121982   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:00.121987   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:00.121991   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:00.122056   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:00.122350   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:00.122361   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:00.122370   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:00.122378   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:00.124459   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:00.124468   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:00.124474   20778 round_trippers.go:580]     Audit-Id: 043aea5d-b758-4e31-8026-6ac6e7581dc9
	I0223 14:23:00.124479   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:00.124484   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:00.124489   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:00.124494   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:00.124499   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:00 GMT
	I0223 14:23:00.124560   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:00.616773   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:00.616795   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:00.616808   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:00.616818   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:00.620827   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:00.620842   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:00.620853   20778 round_trippers.go:580]     Audit-Id: bfa5c5d3-f599-4232-8e24-bdb6fa7d6e12
	I0223 14:23:00.620860   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:00.620869   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:00.620880   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:00.620890   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:00.620897   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:00 GMT
	I0223 14:23:00.621287   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:00.621619   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:00.621626   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:00.621632   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:00.621637   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:00.624138   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:00.624148   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:00.624153   20778 round_trippers.go:580]     Audit-Id: bfa2a5fb-2ce3-4a30-a0b6-c839876a19a3
	I0223 14:23:00.624159   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:00.624164   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:00.624171   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:00.624177   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:00.624181   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:00 GMT
	I0223 14:23:00.624312   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:01.116501   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:01.116522   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:01.116534   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:01.116544   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:01.120076   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:01.120087   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:01.120093   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:01.120103   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:01.120109   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:01 GMT
	I0223 14:23:01.120114   20778 round_trippers.go:580]     Audit-Id: e973932f-7933-424c-b847-47909ecf17c8
	I0223 14:23:01.120119   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:01.120124   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:01.120204   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:01.120476   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:01.120482   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:01.120487   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:01.120493   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:01.122595   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:01.122605   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:01.122611   20778 round_trippers.go:580]     Audit-Id: 7fb05415-530f-4893-843e-84214d61a6ba
	I0223 14:23:01.122616   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:01.122621   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:01.122626   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:01.122633   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:01.122639   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:01 GMT
	I0223 14:23:01.122694   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:01.616518   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:01.616545   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:01.616558   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:01.616588   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:01.620565   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:01.620582   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:01.620590   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:01 GMT
	I0223 14:23:01.620597   20778 round_trippers.go:580]     Audit-Id: d1d6210b-433e-4a1a-bc9c-f7e88360446f
	I0223 14:23:01.620603   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:01.620611   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:01.620617   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:01.620630   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:01.620708   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:01.621014   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:01.621021   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:01.621027   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:01.621032   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:01.623254   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:01.623263   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:01.623269   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:01.623275   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:01.623281   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:01.623285   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:01 GMT
	I0223 14:23:01.623291   20778 round_trippers.go:580]     Audit-Id: 776e547a-b014-44cc-bcfa-75cfd6bcd88d
	I0223 14:23:01.623296   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:01.623348   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:01.623533   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:02.117176   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:02.117196   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:02.117208   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:02.117218   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:02.121190   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:02.121206   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:02.121214   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:02.121221   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:02.121228   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:02 GMT
	I0223 14:23:02.121235   20778 round_trippers.go:580]     Audit-Id: 93034275-ebc8-4d5b-9469-3775d80796e2
	I0223 14:23:02.121242   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:02.121248   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:02.121342   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:02.121639   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:02.121646   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:02.121654   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:02.121661   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:02.123801   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:02.123811   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:02.123817   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:02.123822   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:02 GMT
	I0223 14:23:02.123826   20778 round_trippers.go:580]     Audit-Id: 90f4eca5-6afc-4bc2-b987-2048a32e1711
	I0223 14:23:02.123831   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:02.123836   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:02.123840   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:02.124005   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:02.616338   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:02.616351   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:02.616358   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:02.616363   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:02.619284   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:02.619296   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:02.619303   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:02.619309   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:02.619314   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:02.619319   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:02 GMT
	I0223 14:23:02.619324   20778 round_trippers.go:580]     Audit-Id: ebadf671-d8a9-421e-a025-f38feaaa25f7
	I0223 14:23:02.619329   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:02.619423   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:02.619720   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:02.619727   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:02.619733   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:02.619741   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:02.621845   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:02.621856   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:02.621864   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:02 GMT
	I0223 14:23:02.621870   20778 round_trippers.go:580]     Audit-Id: 2579e22e-690e-4ebe-8b98-5ef9baab153a
	I0223 14:23:02.621880   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:02.621885   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:02.621890   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:02.621895   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:02.622181   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:03.116593   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:03.116607   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:03.116613   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:03.116619   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:03.119400   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:03.119412   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:03.119418   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:03.119422   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:03.119427   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:03.119432   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:03 GMT
	I0223 14:23:03.119438   20778 round_trippers.go:580]     Audit-Id: c40fb8e0-8a9c-4013-9441-26c3c5726c23
	I0223 14:23:03.119442   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:03.119504   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:03.119777   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:03.119783   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:03.119789   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:03.119795   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:03.121924   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:03.121934   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:03.121939   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:03 GMT
	I0223 14:23:03.121944   20778 round_trippers.go:580]     Audit-Id: 80e1b293-e280-4fa1-b7c1-54cffadaac26
	I0223 14:23:03.121949   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:03.121955   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:03.121960   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:03.121964   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:03.122160   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:03.616295   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:03.616308   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:03.616314   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:03.616319   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:03.665926   20778 round_trippers.go:574] Response Status: 200 OK in 49 milliseconds
	I0223 14:23:03.665951   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:03.665963   20778 round_trippers.go:580]     Audit-Id: b00b7eda-0b90-4d3f-be92-30cc95ba7a30
	I0223 14:23:03.665973   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:03.665982   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:03.665991   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:03.666001   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:03.666011   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:03 GMT
	I0223 14:23:03.667392   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:03.667769   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:03.667778   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:03.667787   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:03.667798   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:03.670516   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:03.670528   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:03.670534   20778 round_trippers.go:580]     Audit-Id: f4e3b551-002a-4c6e-90e2-f228d8556662
	I0223 14:23:03.670538   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:03.670544   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:03.670549   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:03.670554   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:03.670558   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:03 GMT
	I0223 14:23:03.670641   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:03.670866   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:04.116550   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:04.116563   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:04.116570   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:04.116575   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:04.119043   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:04.119055   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:04.119061   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:04.119068   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:04.119073   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:04.119077   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:04 GMT
	I0223 14:23:04.119082   20778 round_trippers.go:580]     Audit-Id: 280237db-7115-4695-8818-15e643797b3f
	I0223 14:23:04.119088   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:04.119238   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:04.119533   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:04.119540   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:04.119545   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:04.119563   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:04.121777   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:04.121787   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:04.121794   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:04 GMT
	I0223 14:23:04.121801   20778 round_trippers.go:580]     Audit-Id: 0ad994bf-b3dc-4182-90b5-14e5b0791ee2
	I0223 14:23:04.121806   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:04.121811   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:04.121819   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:04.121823   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:04.122078   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:04.616299   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:04.616315   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:04.616322   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:04.616329   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:04.619170   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:04.619183   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:04.619189   20778 round_trippers.go:580]     Audit-Id: da4c45c0-eac5-4acd-acf2-b2c5e1bea699
	I0223 14:23:04.619195   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:04.619199   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:04.619204   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:04.619211   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:04.619221   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:04 GMT
	I0223 14:23:04.619311   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:04.619600   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:04.619608   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:04.619616   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:04.619624   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:04.623247   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:04.623261   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:04.623267   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:04.623272   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:04 GMT
	I0223 14:23:04.623277   20778 round_trippers.go:580]     Audit-Id: d9899891-ab02-453d-bf77-0c07b49ed368
	I0223 14:23:04.623281   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:04.623286   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:04.623293   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:04.623356   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"305","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:05.116395   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:05.116409   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:05.116416   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:05.116421   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:05.119323   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:05.119335   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:05.119347   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:05.119353   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:05 GMT
	I0223 14:23:05.119358   20778 round_trippers.go:580]     Audit-Id: ea15df90-12c7-41a6-9819-1a7e0d661048
	I0223 14:23:05.119363   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:05.119368   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:05.119373   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:05.119433   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:05.119714   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:05.119721   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:05.119727   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:05.119732   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:05.123537   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:05.123548   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:05.123554   20778 round_trippers.go:580]     Audit-Id: 4c0d5024-4521-462b-8e57-0878174b3c58
	I0223 14:23:05.123559   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:05.123565   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:05.123571   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:05.123577   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:05.123582   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:05 GMT
	I0223 14:23:05.123640   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:05.617165   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:05.617178   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:05.617184   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:05.617190   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:05.667196   20778 round_trippers.go:574] Response Status: 200 OK in 49 milliseconds
	I0223 14:23:05.667220   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:05.667229   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:05 GMT
	I0223 14:23:05.667237   20778 round_trippers.go:580]     Audit-Id: e68ffa82-1ac6-437c-87d6-bd2d513155bd
	I0223 14:23:05.667244   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:05.667252   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:05.667259   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:05.667267   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:05.667815   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:05.668307   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:05.668316   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:05.668328   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:05.668336   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:05.670603   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:05.670634   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:05.670647   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:05.670658   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:05.670666   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:05.670673   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:05 GMT
	I0223 14:23:05.670683   20778 round_trippers.go:580]     Audit-Id: d1a4924d-0629-456b-ba4d-e98ee575d603
	I0223 14:23:05.670691   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:05.670927   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:05.671157   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:06.116435   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:06.116446   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:06.116453   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:06.116458   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:06.119403   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:06.119418   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:06.119425   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:06.119433   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:06.119440   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:06 GMT
	I0223 14:23:06.119445   20778 round_trippers.go:580]     Audit-Id: d838565a-236a-43ad-bf66-b30a8f4cbcf9
	I0223 14:23:06.119450   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:06.119455   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:06.119529   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:06.119813   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:06.119819   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:06.119825   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:06.119830   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:06.122032   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:06.122045   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:06.122054   20778 round_trippers.go:580]     Audit-Id: 0425c4a8-a4fb-4aca-86be-2f23c92ebaee
	I0223 14:23:06.122061   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:06.122070   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:06.122077   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:06.122086   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:06.122094   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:06 GMT
	I0223 14:23:06.122193   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:06.616360   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:06.616373   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:06.616387   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:06.616393   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:06.619341   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:06.619365   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:06.619380   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:06.619392   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:06.619401   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:06.619409   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:06.619416   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:06 GMT
	I0223 14:23:06.619424   20778 round_trippers.go:580]     Audit-Id: 5c60af4a-693a-4113-88bf-75b766692b45
	I0223 14:23:06.619498   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:06.619810   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:06.619818   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:06.619827   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:06.619835   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:06.622133   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:06.622144   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:06.622150   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:06.622156   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:06.622161   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:06.622166   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:06 GMT
	I0223 14:23:06.622174   20778 round_trippers.go:580]     Audit-Id: 7ead92d5-2ff4-4831-8772-ec872a778c2b
	I0223 14:23:06.622179   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:06.622236   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:07.116429   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:07.116443   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:07.116450   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:07.116455   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:07.119244   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:07.119258   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:07.119266   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:07.119273   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:07 GMT
	I0223 14:23:07.119280   20778 round_trippers.go:580]     Audit-Id: 4a62c324-bd03-417c-9db5-267cee771840
	I0223 14:23:07.119287   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:07.119301   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:07.119306   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:07.119372   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:07.119697   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:07.119706   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:07.119713   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:07.119721   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:07.165551   20778 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I0223 14:23:07.165629   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:07.165658   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:07.165673   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:07.165686   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:07.165701   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:07.165718   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:07 GMT
	I0223 14:23:07.165736   20778 round_trippers.go:580]     Audit-Id: f139e90a-75ff-4306-a60e-636a3ffc350a
	I0223 14:23:07.166302   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:07.616565   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:07.616579   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:07.616585   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:07.616590   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:07.619324   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:07.619337   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:07.619343   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:07.619347   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:07.619352   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:07 GMT
	I0223 14:23:07.619358   20778 round_trippers.go:580]     Audit-Id: 370427bd-2242-4d13-b1d3-e2b6aeac6a3a
	I0223 14:23:07.619367   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:07.619373   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:07.619438   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:07.619743   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:07.619750   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:07.619755   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:07.619761   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:07.622169   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:07.622179   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:07.622186   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:07.622191   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:07 GMT
	I0223 14:23:07.622196   20778 round_trippers.go:580]     Audit-Id: cbe275c9-1278-4b10-b457-37a543e7e7c2
	I0223 14:23:07.622201   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:07.622206   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:07.622211   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:07.622275   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:08.116354   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:08.116368   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:08.116375   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:08.116380   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:08.119183   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:08.119197   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:08.119203   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:08.119208   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:08.119232   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:08 GMT
	I0223 14:23:08.119242   20778 round_trippers.go:580]     Audit-Id: 66560798-fa67-40a1-a845-7c2d35d698b5
	I0223 14:23:08.119252   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:08.119265   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:08.119478   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:08.119797   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:08.119807   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:08.119813   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:08.119819   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:08.121743   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:08.121753   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:08.121758   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:08.121764   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:08.121769   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:08 GMT
	I0223 14:23:08.121774   20778 round_trippers.go:580]     Audit-Id: 095d3abe-2a48-459f-a792-60e1737ab6b3
	I0223 14:23:08.121779   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:08.121784   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:08.122004   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:08.122187   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:08.616462   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:08.616485   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:08.616500   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:08.616512   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:08.666460   20778 round_trippers.go:574] Response Status: 200 OK in 49 milliseconds
	I0223 14:23:08.666481   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:08.666490   20778 round_trippers.go:580]     Audit-Id: 27fe5585-7dc8-47ac-8e6c-3103cbb13ed7
	I0223 14:23:08.666498   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:08.666504   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:08.666511   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:08.666518   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:08.666525   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:08 GMT
	I0223 14:23:08.666610   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:08.667019   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:08.667028   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:08.667037   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:08.667044   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:08.669393   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:08.669404   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:08.669410   20778 round_trippers.go:580]     Audit-Id: 4b0f8274-dcfa-4308-86d6-0bb74d08d915
	I0223 14:23:08.669417   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:08.669424   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:08.669429   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:08.669435   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:08.669440   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:08 GMT
	I0223 14:23:08.669526   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:09.116392   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:09.116405   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:09.116412   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:09.116417   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:09.118927   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:09.118944   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:09.118954   20778 round_trippers.go:580]     Audit-Id: 70c3a6b8-db42-4684-b76b-126f90f1e712
	I0223 14:23:09.118962   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:09.118970   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:09.118978   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:09.118986   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:09.118995   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:09 GMT
	I0223 14:23:09.119081   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:09.119441   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:09.119451   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:09.119459   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:09.119468   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:09.121680   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:09.121692   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:09.121697   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:09.121702   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:09.121707   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:09.121712   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:09.121717   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:09 GMT
	I0223 14:23:09.121722   20778 round_trippers.go:580]     Audit-Id: e8a8d03c-6e15-468a-ad56-fe6c5117c791
	I0223 14:23:09.121789   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:09.616525   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:09.616542   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:09.616551   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:09.616557   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:09.619434   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:09.619448   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:09.619454   20778 round_trippers.go:580]     Audit-Id: 5edf01ec-ead8-43aa-9f70-0d35e6776027
	I0223 14:23:09.619459   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:09.619464   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:09.619469   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:09.619474   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:09.619479   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:09 GMT
	I0223 14:23:09.619541   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:09.619825   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:09.619832   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:09.619838   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:09.619843   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:09.622600   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:09.622612   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:09.622617   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:09 GMT
	I0223 14:23:09.622623   20778 round_trippers.go:580]     Audit-Id: f5b8c9d1-8582-43b6-b869-264e811e523c
	I0223 14:23:09.622628   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:09.622633   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:09.622638   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:09.622643   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:09.622709   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:10.116458   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:10.116473   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:10.116482   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:10.116487   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:10.119325   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:10.119341   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:10.119347   20778 round_trippers.go:580]     Audit-Id: d780daa1-7868-45ae-b119-5f3e7cf50343
	I0223 14:23:10.119354   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:10.119362   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:10.119369   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:10.119375   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:10.119379   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:10 GMT
	I0223 14:23:10.119452   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:10.119796   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:10.119804   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:10.119813   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:10.119824   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:10.165977   20778 round_trippers.go:574] Response Status: 200 OK in 46 milliseconds
	I0223 14:23:10.166000   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:10.166013   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:10.166025   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:10 GMT
	I0223 14:23:10.166037   20778 round_trippers.go:580]     Audit-Id: 2874327a-f156-4571-9f37-eb70f00579a0
	I0223 14:23:10.166049   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:10.166064   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:10.166072   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:10.166622   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:10.166909   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:10.616523   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:10.616544   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:10.616555   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:10.616564   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:10.619353   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:10.619371   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:10.619378   20778 round_trippers.go:580]     Audit-Id: 65dd7960-ac84-40a8-88f9-1d60afa0ba00
	I0223 14:23:10.619383   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:10.619388   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:10.619392   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:10.619402   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:10.619408   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:10 GMT
	I0223 14:23:10.619482   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:10.619773   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:10.619780   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:10.619786   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:10.619791   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:10.622062   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:10.622074   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:10.622080   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:10.622085   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:10.622093   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:10.622099   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:10.622104   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:10 GMT
	I0223 14:23:10.622116   20778 round_trippers.go:580]     Audit-Id: e4101d3a-621c-47da-be72-77083957d3a0
	I0223 14:23:10.622182   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:11.116630   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:11.116645   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:11.116651   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:11.116657   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:11.119186   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:11.119200   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:11.119206   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:11.119224   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:11 GMT
	I0223 14:23:11.119233   20778 round_trippers.go:580]     Audit-Id: 13abcccd-fb85-43fd-b87e-c37ad2f3438e
	I0223 14:23:11.119239   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:11.119244   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:11.119249   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:11.119316   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:11.119629   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:11.119636   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:11.119642   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:11.119647   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:11.122137   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:11.122147   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:11.122153   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:11.122158   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:11.122513   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:11.122622   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:11.122646   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:11 GMT
	I0223 14:23:11.122661   20778 round_trippers.go:580]     Audit-Id: fd677e5c-b452-4cd5-ae1c-4e68e5800f26
	I0223 14:23:11.122862   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:11.616364   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:11.616387   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:11.616440   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:11.616446   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:11.618929   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:11.618942   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:11.618949   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:11 GMT
	I0223 14:23:11.618955   20778 round_trippers.go:580]     Audit-Id: 0b2270e7-7676-4266-9714-b00b780bc78e
	I0223 14:23:11.618962   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:11.618967   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:11.618971   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:11.618976   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:11.619614   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"390","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 14:23:11.619896   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:11.619903   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:11.619908   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:11.619914   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:11.622533   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:11.622545   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:11.622551   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:11.622556   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:11.622562   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:11 GMT
	I0223 14:23:11.622567   20778 round_trippers.go:580]     Audit-Id: bf3875ef-669f-4f72-8240-3f6b0e99837c
	I0223 14:23:11.622572   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:11.622577   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:11.622631   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:12.116936   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:12.117020   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:12.117040   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:12.117053   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:12.120560   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:12.120576   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:12.120583   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:12.120591   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:12.120596   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:12 GMT
	I0223 14:23:12.120601   20778 round_trippers.go:580]     Audit-Id: 7f1d8a00-63f2-4857-bccb-0b357984111b
	I0223 14:23:12.120606   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:12.120611   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:12.120688   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"422","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6380 chars]
	I0223 14:23:12.120995   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:12.121002   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:12.121011   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:12.121017   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:12.123432   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:12.123442   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:12.123448   20778 round_trippers.go:580]     Audit-Id: e3d3655d-6172-4022-aace-d9e9f64dfcc2
	I0223 14:23:12.123453   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:12.123472   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:12.123480   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:12.123487   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:12.123492   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:12 GMT
	I0223 14:23:12.123599   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:12.617816   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:12.617844   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:12.617856   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:12.617959   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:12.622252   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:12.622267   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:12.622284   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:12.622293   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:12.622300   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:12.622307   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:12.622314   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:12 GMT
	I0223 14:23:12.622334   20778 round_trippers.go:580]     Audit-Id: 6752de30-b0aa-4111-85c2-b59c7163ef95
	I0223 14:23:12.622386   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"422","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6380 chars]
	I0223 14:23:12.622670   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:12.622677   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:12.622683   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:12.622688   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:12.624742   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:12.624751   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:12.624756   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:12 GMT
	I0223 14:23:12.624761   20778 round_trippers.go:580]     Audit-Id: fe7e2bf0-d664-4f37-be19-9723fb1889de
	I0223 14:23:12.624767   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:12.624773   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:12.624779   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:12.624784   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:12.624832   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:12.625000   20778 pod_ready.go:102] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:13.117273   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:13.117287   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.117293   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.117298   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.120073   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:13.120085   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.120091   20778 round_trippers.go:580]     Audit-Id: cd7766ab-f79c-443f-aa25-e0d0837eb615
	I0223 14:23:13.120095   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.120100   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.120105   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.120110   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.120115   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.120173   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"426","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 14:23:13.120441   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.120450   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.120456   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.120462   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.123094   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:13.123102   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.123107   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.123112   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.123117   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.123122   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.123127   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.123132   20778 round_trippers.go:580]     Audit-Id: 92812056-cccc-4717-a569-e767beaa3385
	I0223 14:23:13.123186   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:13.123361   20778 pod_ready.go:92] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:13.123372   20778 pod_ready.go:81] duration metric: took 16.013355024s waiting for pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.123378   20778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-4rfn2" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.123409   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4rfn2
	I0223 14:23:13.123414   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.123419   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.123424   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.125347   20778 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0223 14:23:13.125357   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.125363   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.125368   20778 round_trippers.go:580]     Audit-Id: edb4ec90-4c11-4174-865c-82edf5962970
	I0223 14:23:13.125374   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.125381   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.125387   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.125391   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.125396   20778 round_trippers.go:580]     Content-Length: 216
	I0223 14:23:13.125407   20778 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-4rfn2\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-4rfn2","kind":"pods"},"code":404}
	I0223 14:23:13.125513   20778 pod_ready.go:97] error getting pod "coredns-787d4945fb-4rfn2" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-4rfn2" not found
	I0223 14:23:13.125521   20778 pod_ready.go:81] duration metric: took 2.137024ms waiting for pod "coredns-787d4945fb-4rfn2" in "kube-system" namespace to be "Ready" ...
	E0223 14:23:13.125526   20778 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-4rfn2" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-4rfn2" not found
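The 404 above is tolerated rather than treated as a failure: the waiter logs the missing replica as "skipping!" and moves on to the next pod. A minimal sketch of that shape, using client-go's NotFound check; the clientset, namespace, and pod name here are placeholders, not values from this test run.

    // Package podwait: illustrative only, mirroring the "skipping!" behaviour
    // logged above. A pod that has been deleted counts as done, not as an error.
    package podwait

    import (
    	"context"
    	"fmt"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func podReadyOrGone(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if apierrors.IsNotFound(err) {
    		// Pod no longer exists (e.g. replaced by its ReplicaSet); skip it.
    		fmt.Printf("pod %q not found, skipping\n", name)
    		return true, nil
    	}
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == "Ready" && c.Status == "True" {
    			return true, nil
    		}
    	}
    	return false, nil
    }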
	I0223 14:23:13.125531   20778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.125559   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/etcd-multinode-359000
	I0223 14:23:13.125564   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.125569   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.125575   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.127741   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:13.127750   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.127756   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.127761   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.127767   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.127771   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.127777   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.127782   20778 round_trippers.go:580]     Audit-Id: a6643dbc-1e4d-47c2-922e-c591fd2e9585
	I0223 14:23:13.127855   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-359000","namespace":"kube-system","uid":"398e38cc-24ea-4f91-8b62-51681eb997b4","resourceVersion":"295","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"93ed633257d1dccd5f056f259fe5ad92","kubernetes.io/config.mirror":"93ed633257d1dccd5f056f259fe5ad92","kubernetes.io/config.seen":"2023-02-23T22:22:43.384430470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 14:23:13.128073   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.128079   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.128085   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.128090   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.129969   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:13.129979   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.129984   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.129990   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.129996   20778 round_trippers.go:580]     Audit-Id: c252ef2b-0991-418d-b495-f380d2c313b6
	I0223 14:23:13.130001   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.130006   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.130011   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.130066   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:13.130244   20778 pod_ready.go:92] pod "etcd-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:13.130249   20778 pod_ready.go:81] duration metric: took 4.713845ms waiting for pod "etcd-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.130256   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.130284   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-359000
	I0223 14:23:13.130288   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.130296   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.130303   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.132448   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:13.132457   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.132462   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.132467   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.132472   20778 round_trippers.go:580]     Audit-Id: bfa3b467-1501-4dcb-acae-7c8e8a32468f
	I0223 14:23:13.132478   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.132482   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.132488   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.132550   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-359000","namespace":"kube-system","uid":"39b152d9-2735-457b-a3a1-5e7aca7dc8f3","resourceVersion":"264","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"cfb3605b4e0ab2e0442f07f281676240","kubernetes.io/config.mirror":"cfb3605b4e0ab2e0442f07f281676240","kubernetes.io/config.seen":"2023-02-23T22:22:43.384450086Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 14:23:13.132800   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.132805   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.132811   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.132816   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.134907   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:13.134916   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.134921   20778 round_trippers.go:580]     Audit-Id: 2c875bb2-25c3-4dc0-aef3-6268b7a58989
	I0223 14:23:13.134927   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.134933   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.134938   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.134943   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.134948   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.134994   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:13.135158   20778 pod_ready.go:92] pod "kube-apiserver-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:13.135163   20778 pod_ready.go:81] duration metric: took 4.903109ms waiting for pod "kube-apiserver-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.135168   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.135193   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-359000
	I0223 14:23:13.135198   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.135204   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.135209   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.137058   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:13.137067   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.137073   20778 round_trippers.go:580]     Audit-Id: e18cf633-434b-4ddc-9aa8-e86db08f416b
	I0223 14:23:13.137078   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.137084   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.137092   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.137097   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.137102   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.137170   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-359000","namespace":"kube-system","uid":"361170a2-c3b3-4be5-95ca-334b3b892a82","resourceVersion":"267","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2d2ed3414aeb862284d35d22f8aea7e3","kubernetes.io/config.mirror":"2d2ed3414aeb862284d35d22f8aea7e3","kubernetes.io/config.seen":"2023-02-23T22:22:43.384451227Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 14:23:13.137419   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.137425   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.137431   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.137436   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.139685   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:13.139696   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.139702   20778 round_trippers.go:580]     Audit-Id: cb420e48-6de6-4c76-bb5f-77332cebb38a
	I0223 14:23:13.139707   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.139713   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.139718   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.139723   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.139728   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.139788   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:13.139981   20778 pod_ready.go:92] pod "kube-controller-manager-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:13.139987   20778 pod_ready.go:81] duration metric: took 4.814281ms waiting for pod "kube-controller-manager-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.139992   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lkkx4" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.140022   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-lkkx4
	I0223 14:23:13.140027   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.140034   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.140041   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.141993   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:13.142002   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.142008   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.142013   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.142018   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.142024   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.142029   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.142035   20778 round_trippers.go:580]     Audit-Id: 101c32ef-b444-44e2-9126-50cdd0b847d5
	I0223 14:23:13.142252   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lkkx4","generateName":"kube-proxy-","namespace":"kube-system","uid":"42230635-8bb5-4f57-b543-5ddbeada143a","resourceVersion":"392","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 14:23:13.317781   20778 request.go:622] Waited for 175.206143ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.317834   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.317844   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.317855   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.317875   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.321915   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:13.321926   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.321931   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.321937   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.321942   20778 round_trippers.go:580]     Audit-Id: ebf93ff4-c742-4fe6-9169-0321f3e6713e
	I0223 14:23:13.321948   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.321952   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.321958   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.322013   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:13.322200   20778 pod_ready.go:92] pod "kube-proxy-lkkx4" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:13.322206   20778 pod_ready.go:81] duration metric: took 182.208215ms waiting for pod "kube-proxy-lkkx4" in "kube-system" namespace to be "Ready" ...
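The "Waited for ... due to client-side throttling, not priority and fairness" lines nearby come from the REST client's own token-bucket rate limiter, not from the apiserver. A minimal sketch of that token-bucket behaviour using golang.org/x/time/rate; the 5 QPS / burst-10 values match client-go's documented defaults for an unset rest.Config and are only assumptions as far as this run goes.

    // Sketch of client-side token-bucket throttling: bursts go through
    // immediately, later requests wait until the bucket refills.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"golang.org/x/time/rate"
    )

    func main() {
    	limiter := rate.NewLimiter(rate.Limit(5), 10) // 5 requests/sec, burst of 10

    	for i := 0; i < 15; i++ {
    		start := time.Now()
    		if err := limiter.Wait(context.Background()); err != nil {
    			panic(err)
    		}
    		if waited := time.Since(start); waited > time.Millisecond {
    			fmt.Printf("request %d waited %v before being sent\n", i, waited)
    		}
    	}
    }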
	I0223 14:23:13.322211   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.517861   20778 request.go:622] Waited for 195.513883ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-359000
	I0223 14:23:13.517908   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-359000
	I0223 14:23:13.517916   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.517942   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.517955   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.522391   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:13.522412   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.522421   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.522430   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.522437   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.522444   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.522451   20778 round_trippers.go:580]     Audit-Id: 6d7b42ab-d9e7-4560-88e9-28babffd876a
	I0223 14:23:13.522472   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.522527   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-359000","namespace":"kube-system","uid":"525e88fd-a6fc-470a-a99a-6ceede2058e5","resourceVersion":"291","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"68ba80c02e331ad063843d01029c90d4","kubernetes.io/config.mirror":"68ba80c02e331ad063843d01029c90d4","kubernetes.io/config.seen":"2023-02-23T22:22:43.384451945Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 14:23:13.718151   20778 request.go:622] Waited for 195.325489ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.718269   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:13.718280   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.718291   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.718303   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.722908   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:13.722921   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.722927   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.722932   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.722936   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.722941   20778 round_trippers.go:580]     Audit-Id: 6dd939fa-9119-4f45-a82f-21d4c06a38a8
	I0223 14:23:13.722946   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.722950   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.723012   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 4952 chars]
	I0223 14:23:13.723204   20778 pod_ready.go:92] pod "kube-scheduler-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:13.723212   20778 pod_ready.go:81] duration metric: took 400.992668ms waiting for pod "kube-scheduler-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:13.723219   20778 pod_ready.go:38] duration metric: took 16.62070692s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 14:23:13.723233   20778 api_server.go:51] waiting for apiserver process to appear ...
	I0223 14:23:13.723289   20778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:23:13.732561   20778 command_runner.go:130] > 2006
	I0223 14:23:13.733202   20778 api_server.go:71] duration metric: took 17.258297848s to wait for apiserver process to appear ...
	I0223 14:23:13.733213   20778 api_server.go:87] waiting for apiserver healthz status ...
	I0223 14:23:13.733224   20778 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:58734/healthz ...
	I0223 14:23:13.738817   20778 api_server.go:278] https://127.0.0.1:58734/healthz returned 200:
	ok
	I0223 14:23:13.738855   20778 round_trippers.go:463] GET https://127.0.0.1:58734/version
	I0223 14:23:13.738861   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.738870   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.738876   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.740041   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:13.740052   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.740058   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.740064   20778 round_trippers.go:580]     Content-Length: 263
	I0223 14:23:13.740069   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.740074   20778 round_trippers.go:580]     Audit-Id: 5a08e647-55cb-40c3-83ee-83b9a1a18305
	I0223 14:23:13.740079   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.740084   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.740096   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.740106   20778 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 14:23:13.740148   20778 api_server.go:140] control plane version: v1.26.1
	I0223 14:23:13.740156   20778 api_server.go:130] duration metric: took 6.939102ms to wait for apiserver health ...
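The two probes above are plain HTTPS GETs against the host-forwarded apiserver port (58734 in this run; the port is assigned per start). A rough manual equivalent, assuming the cluster from this log is still up (curl -k skips TLS verification, which the test instead satisfies with client certificates):

    curl -k https://127.0.0.1:58734/healthz    # expect the literal body "ok", as logged above
    curl -k https://127.0.0.1:58734/version    # expect gitVersion "v1.26.1", parsed above as "control plane version"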
	I0223 14:23:13.740160   20778 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 14:23:13.917857   20778 request.go:622] Waited for 177.652884ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods
	I0223 14:23:13.917997   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods
	I0223 14:23:13.918009   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:13.918025   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:13.918036   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:13.922221   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:13.922236   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:13.922244   20778 round_trippers.go:580]     Audit-Id: 46cf40f5-a212-4f6a-9544-db09a5453ef2
	I0223 14:23:13.922251   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:13.922258   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:13.922264   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:13.922283   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:13.922295   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:13 GMT
	I0223 14:23:13.923644   20778 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"426","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 14:23:13.924906   20778 system_pods.go:59] 8 kube-system pods found
	I0223 14:23:13.924916   20778 system_pods.go:61] "coredns-787d4945fb-4hj2n" [034c3c0c-5eec-4b91-9daf-1317dc6af725] Running
	I0223 14:23:13.924920   20778 system_pods.go:61] "etcd-multinode-359000" [398e38cc-24ea-4f91-8b62-51681eb997b4] Running
	I0223 14:23:13.924926   20778 system_pods.go:61] "kindnet-8hs9x" [89d966b4-fbe8-4c74-83f5-ae4a97ceebc0] Running
	I0223 14:23:13.924931   20778 system_pods.go:61] "kube-apiserver-multinode-359000" [39b152d9-2735-457b-a3a1-5e7aca7dc8f3] Running
	I0223 14:23:13.924934   20778 system_pods.go:61] "kube-controller-manager-multinode-359000" [361170a2-c3b3-4be5-95ca-334b3b892a82] Running
	I0223 14:23:13.924939   20778 system_pods.go:61] "kube-proxy-lkkx4" [42230635-8bb5-4f57-b543-5ddbeada143a] Running
	I0223 14:23:13.924942   20778 system_pods.go:61] "kube-scheduler-multinode-359000" [525e88fd-a6fc-470a-a99a-6ceede2058e5] Running
	I0223 14:23:13.924947   20778 system_pods.go:61] "storage-provisioner" [8f927b9f-d9b7-4b15-9905-e816d50c40bc] Running
	I0223 14:23:13.924952   20778 system_pods.go:74] duration metric: took 184.786418ms to wait for pod list to return data ...
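The system_pods step only requires every kube-system pod to report phase Running before the test moves on. A hand-run sketch of the same check, assuming kubectl is pointed at this profile's kubeconfig:

    kubectl get pods -n kube-system                                         # the 8 pods listed above
    kubectl get pods -n kube-system --field-selector=status.phase!=Running  # should come back empty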
	I0223 14:23:13.924958   20778 default_sa.go:34] waiting for default service account to be created ...
	I0223 14:23:14.119164   20778 request.go:622] Waited for 194.162057ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/namespaces/default/serviceaccounts
	I0223 14:23:14.119214   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/default/serviceaccounts
	I0223 14:23:14.119223   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:14.119235   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:14.119249   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:14.123296   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:14.123313   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:14.123321   20778 round_trippers.go:580]     Content-Length: 261
	I0223 14:23:14.123329   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:14 GMT
	I0223 14:23:14.123337   20778 round_trippers.go:580]     Audit-Id: c7434e29-d38e-4305-89aa-ba01c2e3b085
	I0223 14:23:14.123346   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:14.123356   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:14.123364   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:14.123371   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:14.123385   20778 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"42d0e8e3-00f2-4fab-8d31-6ec487897d7d","resourceVersion":"330","creationTimestamp":"2023-02-23T22:22:55Z"}}]}
	I0223 14:23:14.123505   20778 default_sa.go:45] found service account: "default"
	I0223 14:23:14.123512   20778 default_sa.go:55] duration metric: took 198.547913ms for default service account to be created ...
	I0223 14:23:14.123519   20778 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 14:23:14.317628   20778 request.go:622] Waited for 193.923439ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods
	I0223 14:23:14.317693   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods
	I0223 14:23:14.317703   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:14.317720   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:14.317731   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:14.323007   20778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 14:23:14.323020   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:14.323026   20778 round_trippers.go:580]     Audit-Id: b2d88587-a358-4bfd-a6df-b5403cb46da4
	I0223 14:23:14.323031   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:14.323036   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:14.323049   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:14.323055   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:14.323060   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:14 GMT
	I0223 14:23:14.323419   20778 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"426","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 14:23:14.324675   20778 system_pods.go:86] 8 kube-system pods found
	I0223 14:23:14.324684   20778 system_pods.go:89] "coredns-787d4945fb-4hj2n" [034c3c0c-5eec-4b91-9daf-1317dc6af725] Running
	I0223 14:23:14.324688   20778 system_pods.go:89] "etcd-multinode-359000" [398e38cc-24ea-4f91-8b62-51681eb997b4] Running
	I0223 14:23:14.324692   20778 system_pods.go:89] "kindnet-8hs9x" [89d966b4-fbe8-4c74-83f5-ae4a97ceebc0] Running
	I0223 14:23:14.324696   20778 system_pods.go:89] "kube-apiserver-multinode-359000" [39b152d9-2735-457b-a3a1-5e7aca7dc8f3] Running
	I0223 14:23:14.324700   20778 system_pods.go:89] "kube-controller-manager-multinode-359000" [361170a2-c3b3-4be5-95ca-334b3b892a82] Running
	I0223 14:23:14.324704   20778 system_pods.go:89] "kube-proxy-lkkx4" [42230635-8bb5-4f57-b543-5ddbeada143a] Running
	I0223 14:23:14.324708   20778 system_pods.go:89] "kube-scheduler-multinode-359000" [525e88fd-a6fc-470a-a99a-6ceede2058e5] Running
	I0223 14:23:14.324711   20778 system_pods.go:89] "storage-provisioner" [8f927b9f-d9b7-4b15-9905-e816d50c40bc] Running
	I0223 14:23:14.324716   20778 system_pods.go:126] duration metric: took 201.192484ms to wait for k8s-apps to be running ...
	I0223 14:23:14.324722   20778 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 14:23:14.324779   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:23:14.334751   20778 system_svc.go:56] duration metric: took 10.024824ms WaitForService to wait for kubelet.
	I0223 14:23:14.334763   20778 kubeadm.go:578] duration metric: took 17.859856382s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 14:23:14.334775   20778 node_conditions.go:102] verifying NodePressure condition ...
	I0223 14:23:14.517411   20778 request.go:622] Waited for 182.47484ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/nodes
	I0223 14:23:14.517484   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes
	I0223 14:23:14.517495   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:14.517507   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:14.517520   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:14.521115   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:14.521126   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:14.521131   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:14.521136   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:14.521141   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:14.521146   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:14.521151   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:14 GMT
	I0223 14:23:14.521156   20778 round_trippers.go:580]     Audit-Id: 2b3d52bd-0fd2-4024-9519-3bd516a2549c
	I0223 14:23:14.521223   20778 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"407","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5005 chars]
	I0223 14:23:14.521451   20778 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0223 14:23:14.521463   20778 node_conditions.go:123] node cpu capacity is 6
	I0223 14:23:14.521474   20778 node_conditions.go:105] duration metric: took 186.695174ms to run NodePressure ...
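The NodePressure step reads each node's reported capacity and pressure conditions rather than scheduling anything. An illustrative way to pull the same numbers by hand:

    kubectl get node multinode-359000 -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'
    # expected from this run: 6 61202244Ki; MemoryPressure/DiskPressure/PIDPressure should all read False under .status.conditions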
	I0223 14:23:14.521484   20778 start.go:228] waiting for startup goroutines ...
	I0223 14:23:14.521490   20778 start.go:233] waiting for cluster config update ...
	I0223 14:23:14.521499   20778 start.go:242] writing updated cluster config ...
	I0223 14:23:14.543498   20778 out.go:177] 
	I0223 14:23:14.565360   20778 config.go:182] Loaded profile config "multinode-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:23:14.565473   20778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/config.json ...
	I0223 14:23:14.588078   20778 out.go:177] * Starting worker node multinode-359000-m02 in cluster multinode-359000
	I0223 14:23:14.630002   20778 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 14:23:14.651091   20778 out.go:177] * Pulling base image ...
	I0223 14:23:14.692910   20778 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 14:23:14.692895   20778 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 14:23:14.692965   20778 cache.go:57] Caching tarball of preloaded images
	I0223 14:23:14.693163   20778 preload.go:174] Found /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 14:23:14.693187   20778 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 14:23:14.693314   20778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/config.json ...
	I0223 14:23:14.749919   20778 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 14:23:14.749941   20778 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 14:23:14.749960   20778 cache.go:193] Successfully downloaded all kic artifacts
	I0223 14:23:14.749991   20778 start.go:364] acquiring machines lock for multinode-359000-m02: {Name:mk57942f9b35fbc6d6218dbab8bb92a2c747748c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 14:23:14.750150   20778 start.go:368] acquired machines lock for "multinode-359000-m02" in 147.868µs
	I0223 14:23:14.750175   20778 start.go:93] Provisioning new machine with config: &{Name:multinode-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 14:23:14.750235   20778 start.go:125] createHost starting for "m02" (driver="docker")
	I0223 14:23:14.771991   20778 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 14:23:14.772252   20778 start.go:159] libmachine.API.Create for "multinode-359000" (driver="docker")
	I0223 14:23:14.772294   20778 client.go:168] LocalClient.Create starting
	I0223 14:23:14.772503   20778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem
	I0223 14:23:14.772606   20778 main.go:141] libmachine: Decoding PEM data...
	I0223 14:23:14.772635   20778 main.go:141] libmachine: Parsing certificate...
	I0223 14:23:14.772746   20778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem
	I0223 14:23:14.772812   20778 main.go:141] libmachine: Decoding PEM data...
	I0223 14:23:14.772838   20778 main.go:141] libmachine: Parsing certificate...
	I0223 14:23:14.794209   20778 cli_runner.go:164] Run: docker network inspect multinode-359000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 14:23:14.852419   20778 network_create.go:76] Found existing network {name:multinode-359000 subnet:0xc000f13b00 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0223 14:23:14.852464   20778 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-359000-m02" container
	I0223 14:23:14.852591   20778 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 14:23:14.908120   20778 cli_runner.go:164] Run: docker volume create multinode-359000-m02 --label name.minikube.sigs.k8s.io=multinode-359000-m02 --label created_by.minikube.sigs.k8s.io=true
	I0223 14:23:14.963389   20778 oci.go:103] Successfully created a docker volume multinode-359000-m02
	I0223 14:23:14.963524   20778 cli_runner.go:164] Run: docker run --rm --name multinode-359000-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-359000-m02 --entrypoint /usr/bin/test -v multinode-359000-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 14:23:15.400865   20778 oci.go:107] Successfully prepared a docker volume multinode-359000-m02
	I0223 14:23:15.400905   20778 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 14:23:15.400918   20778 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 14:23:15.401047   20778 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-359000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 14:23:21.697910   20778 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-359000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.296748074s)
	I0223 14:23:21.697931   20778 kic.go:199] duration metric: took 6.296975 seconds to extract preloaded images to volume
	I0223 14:23:21.698055   20778 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 14:23:21.842765   20778 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-359000-m02 --name multinode-359000-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-359000-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-359000-m02 --network multinode-359000 --ip 192.168.58.3 --volume multinode-359000-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
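Each minikube node here is just a privileged container on the cluster's Docker network, given a static IP and with its SSH/Docker/apiserver ports published on random localhost ports. Those mappings can be read back from Docker directly (illustrative):

    docker port multinode-359000-m02 22   # 127.0.0.1:58798 in this run, the port the SSH provisioner dials below
    docker inspect -f '{{ (index .NetworkSettings.Networks "multinode-359000").IPAddress }}' multinode-359000-m02   # 192.168.58.3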
	I0223 14:23:22.193728   20778 cli_runner.go:164] Run: docker container inspect multinode-359000-m02 --format={{.State.Running}}
	I0223 14:23:22.254754   20778 cli_runner.go:164] Run: docker container inspect multinode-359000-m02 --format={{.State.Status}}
	I0223 14:23:22.320121   20778 cli_runner.go:164] Run: docker exec multinode-359000-m02 stat /var/lib/dpkg/alternatives/iptables
	I0223 14:23:22.436970   20778 oci.go:144] the created container "multinode-359000-m02" has a running status.
	I0223 14:23:22.437001   20778 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa...
	I0223 14:23:22.627292   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 14:23:22.627356   20778 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 14:23:22.731706   20778 cli_runner.go:164] Run: docker container inspect multinode-359000-m02 --format={{.State.Status}}
	I0223 14:23:22.788009   20778 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 14:23:22.788029   20778 kic_runner.go:114] Args: [docker exec --privileged multinode-359000-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 14:23:22.896020   20778 cli_runner.go:164] Run: docker container inspect multinode-359000-m02 --format={{.State.Status}}
	I0223 14:23:22.952931   20778 machine.go:88] provisioning docker machine ...
	I0223 14:23:22.952963   20778 ubuntu.go:169] provisioning hostname "multinode-359000-m02"
	I0223 14:23:22.953077   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:23.041907   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:23:23.042298   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58798 <nil> <nil>}
	I0223 14:23:23.042308   20778 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-359000-m02 && echo "multinode-359000-m02" | sudo tee /etc/hostname
	I0223 14:23:23.183047   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-359000-m02
	
	I0223 14:23:23.183137   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:23.240966   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:23:23.241318   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58798 <nil> <nil>}
	I0223 14:23:23.241331   20778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-359000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-359000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-359000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 14:23:23.375447   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 14:23:23.375465   20778 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
	I0223 14:23:23.375473   20778 ubuntu.go:177] setting up certificates
	I0223 14:23:23.375478   20778 provision.go:83] configureAuth start
	I0223 14:23:23.375560   20778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000-m02
	I0223 14:23:23.432646   20778 provision.go:138] copyHostCerts
	I0223 14:23:23.432692   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:23:23.432752   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
	I0223 14:23:23.432764   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:23:23.432885   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
	I0223 14:23:23.433057   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:23:23.433101   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
	I0223 14:23:23.433106   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:23:23.433170   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
	I0223 14:23:23.433288   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:23:23.433326   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
	I0223 14:23:23.433331   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:23:23.433395   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
	I0223 14:23:23.433523   20778 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.multinode-359000-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-359000-m02]
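configureAuth issues a Docker TLS server certificate for the new node whose SANs cover its static IP, localhost and hostnames, signed by the profile's CA. minikube does this in Go, but an openssl sketch of a certificate with the same SANs (file names here are placeholders, not minikube's actual paths) would look like:

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.multinode-359000-m02" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 1095 \
      -extfile <(printf 'subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-359000-m02') \
      -out server.pem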
	I0223 14:23:23.713118   20778 provision.go:172] copyRemoteCerts
	I0223 14:23:23.713177   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 14:23:23.713229   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:23.771686   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58798 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa Username:docker}
	I0223 14:23:23.867004   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 14:23:23.867085   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 14:23:23.884742   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 14:23:23.884832   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0223 14:23:23.902183   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 14:23:23.902269   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 14:23:23.919452   20778 provision.go:86] duration metric: configureAuth took 543.954001ms
	I0223 14:23:23.919467   20778 ubuntu.go:193] setting minikube options for container-runtime
	I0223 14:23:23.919636   20778 config.go:182] Loaded profile config "multinode-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:23:23.919712   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:23.977779   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:23:23.978130   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58798 <nil> <nil>}
	I0223 14:23:23.978141   20778 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 14:23:24.110170   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 14:23:24.110186   20778 ubuntu.go:71] root file system type: overlay
	I0223 14:23:24.110276   20778 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 14:23:24.110354   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:24.169070   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:23:24.169434   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58798 <nil> <nil>}
	I0223 14:23:24.169492   20778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 14:23:24.313183   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 14:23:24.313276   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:24.371727   20778 main.go:141] libmachine: Using SSH client type: native
	I0223 14:23:24.372083   20778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58798 <nil> <nil>}
	I0223 14:23:24.372098   20778 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 14:23:24.988788   20778 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:23:24.311424992 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 14:23:24.988811   20778 machine.go:91] provisioned docker machine in 2.03584995s
	I0223 14:23:24.988817   20778 client.go:171] LocalClient.Create took 10.216456615s
	I0223 14:23:24.988835   20778 start.go:167] duration metric: libmachine.API.Create for "multinode-359000" took 10.216529206s
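The docker.service rewrite above uses a replace-only-if-changed idiom: diff exits non-zero only when the rendered unit differs from the installed one, and only then is the file swapped in and the daemon restarted. The same pattern, stripped to its shape (render_new_unit is a stand-in for whatever produces the new file):

    render_new_unit > /tmp/docker.service.new
    sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new || {
      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload && sudo systemctl restart docker
    }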
	I0223 14:23:24.988841   20778 start.go:300] post-start starting for "multinode-359000-m02" (driver="docker")
	I0223 14:23:24.988845   20778 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 14:23:24.988930   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 14:23:24.988986   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:25.047811   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58798 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa Username:docker}
	I0223 14:23:25.143051   20778 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 14:23:25.146589   20778 command_runner.go:130] > NAME="Ubuntu"
	I0223 14:23:25.146598   20778 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 14:23:25.146604   20778 command_runner.go:130] > ID=ubuntu
	I0223 14:23:25.146626   20778 command_runner.go:130] > ID_LIKE=debian
	I0223 14:23:25.146636   20778 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 14:23:25.146641   20778 command_runner.go:130] > VERSION_ID="20.04"
	I0223 14:23:25.146648   20778 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 14:23:25.146653   20778 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 14:23:25.146657   20778 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 14:23:25.146668   20778 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 14:23:25.146672   20778 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 14:23:25.146676   20778 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 14:23:25.146738   20778 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 14:23:25.146750   20778 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 14:23:25.146756   20778 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 14:23:25.146761   20778 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 14:23:25.146766   20778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
	I0223 14:23:25.146871   20778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
	I0223 14:23:25.147029   20778 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
	I0223 14:23:25.147035   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> /etc/ssl/certs/152102.pem
	I0223 14:23:25.147207   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 14:23:25.154400   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:23:25.171775   20778 start.go:303] post-start completed in 182.925693ms
	I0223 14:23:25.172298   20778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000-m02
	I0223 14:23:25.231050   20778 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/config.json ...
	I0223 14:23:25.231476   20778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 14:23:25.231532   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:25.289247   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58798 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa Username:docker}
	I0223 14:23:25.381822   20778 command_runner.go:130] > 11%!
	(MISSING)I0223 14:23:25.381911   20778 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 14:23:25.386175   20778 command_runner.go:130] > 50G
	I0223 14:23:25.386476   20778 start.go:128] duration metric: createHost completed in 10.63617443s
	I0223 14:23:25.386487   20778 start.go:83] releasing machines lock for "multinode-359000-m02", held for 10.636270241s
	I0223 14:23:25.386578   20778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000-m02
	I0223 14:23:25.468040   20778 out.go:177] * Found network options:
	I0223 14:23:25.490081   20778 out.go:177]   - NO_PROXY=192.168.58.2
	W0223 14:23:25.511216   20778 proxy.go:119] fail to check proxy env: Error ip not in block
	W0223 14:23:25.511276   20778 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 14:23:25.511426   20778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 14:23:25.511490   20778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 14:23:25.511534   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:25.511620   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:23:25.573252   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58798 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa Username:docker}
	I0223 14:23:25.574739   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58798 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa Username:docker}
	I0223 14:23:25.715181   20778 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 14:23:25.715228   20778 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 14:23:25.715244   20778 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 14:23:25.715250   20778 command_runner.go:130] > Device: 10001bh/1048603d	Inode: 269040      Links: 1
	I0223 14:23:25.715255   20778 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 14:23:25.715263   20778 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 14:23:25.715267   20778 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 14:23:25.715271   20778 command_runner.go:130] > Change: 2023-02-23 21:59:23.933961994 +0000
	I0223 14:23:25.715275   20778 command_runner.go:130] >  Birth: -
	I0223 14:23:25.715367   20778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 14:23:25.735908   20778 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
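The find/sed pass above gives the loopback CNI config an explicit "name" field (if it lacks one) and pins its cniVersion to 1.0.0 so cri-dockerd accepts it. One way to confirm the patch on the node (illustrative):

    docker exec multinode-359000-m02 cat /etc/cni/net.d/200-loopback.conf
    # expect "cniVersion": "1.0.0" and a "name" entry alongside "type": "loopback"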
	I0223 14:23:25.735988   20778 ssh_runner.go:195] Run: which cri-dockerd
	I0223 14:23:25.739555   20778 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 14:23:25.739774   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 14:23:25.747366   20778 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 14:23:25.759967   20778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 14:23:25.774472   20778 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 14:23:25.774507   20778 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 14:23:25.774516   20778 start.go:485] detecting cgroup driver to use...
	I0223 14:23:25.774530   20778 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:23:25.774612   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:23:25.787192   20778 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 14:23:25.787204   20778 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 14:23:25.787956   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 14:23:25.796437   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 14:23:25.804928   20778 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 14:23:25.804991   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 14:23:25.813830   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:23:25.822755   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 14:23:25.831428   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:23:25.839842   20778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 14:23:25.847596   20778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 14:23:25.855973   20778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 14:23:25.862383   20778 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 14:23:25.862951   20778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 14:23:25.870200   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:23:25.938701   20778 ssh_runner.go:195] Run: sudo systemctl restart containerd
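The sed series above rewrites /etc/containerd/config.toml so containerd matches the host's cgroupfs driver, uses the runc v2 shim, reads CNI config from /etc/cni/net.d and pins the pause image, after which containerd is restarted. A quick spot-check of the result (illustrative):

    docker exec multinode-359000-m02 grep -E 'SystemdCgroup|runc\.v2|conf_dir|sandbox_image' /etc/containerd/config.toml
    # expect SystemdCgroup = false, io.containerd.runc.v2, conf_dir = "/etc/cni/net.d", sandbox_image = "registry.k8s.io/pause:3.9"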
	I0223 14:23:26.013140   20778 start.go:485] detecting cgroup driver to use...
	I0223 14:23:26.013160   20778 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:23:26.013226   20778 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 14:23:26.022626   20778 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 14:23:26.022719   20778 command_runner.go:130] > [Unit]
	I0223 14:23:26.022729   20778 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 14:23:26.022734   20778 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 14:23:26.022738   20778 command_runner.go:130] > BindsTo=containerd.service
	I0223 14:23:26.022743   20778 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 14:23:26.022746   20778 command_runner.go:130] > Wants=network-online.target
	I0223 14:23:26.022750   20778 command_runner.go:130] > Requires=docker.socket
	I0223 14:23:26.022755   20778 command_runner.go:130] > StartLimitBurst=3
	I0223 14:23:26.022759   20778 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 14:23:26.022766   20778 command_runner.go:130] > [Service]
	I0223 14:23:26.022771   20778 command_runner.go:130] > Type=notify
	I0223 14:23:26.022774   20778 command_runner.go:130] > Restart=on-failure
	I0223 14:23:26.022778   20778 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0223 14:23:26.022783   20778 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 14:23:26.022792   20778 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 14:23:26.022797   20778 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 14:23:26.022802   20778 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 14:23:26.022808   20778 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 14:23:26.022815   20778 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 14:23:26.022820   20778 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 14:23:26.022834   20778 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 14:23:26.022841   20778 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 14:23:26.022844   20778 command_runner.go:130] > ExecStart=
	I0223 14:23:26.022862   20778 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 14:23:26.022867   20778 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 14:23:26.022872   20778 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 14:23:26.022878   20778 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 14:23:26.022883   20778 command_runner.go:130] > LimitNOFILE=infinity
	I0223 14:23:26.022887   20778 command_runner.go:130] > LimitNPROC=infinity
	I0223 14:23:26.022890   20778 command_runner.go:130] > LimitCORE=infinity
	I0223 14:23:26.022895   20778 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 14:23:26.022899   20778 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 14:23:26.022904   20778 command_runner.go:130] > TasksMax=infinity
	I0223 14:23:26.022908   20778 command_runner.go:130] > TimeoutStartSec=0
	I0223 14:23:26.022913   20778 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 14:23:26.022916   20778 command_runner.go:130] > Delegate=yes
	I0223 14:23:26.022925   20778 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 14:23:26.022929   20778 command_runner.go:130] > KillMode=process
	I0223 14:23:26.022932   20778 command_runner.go:130] > [Install]
	I0223 14:23:26.022936   20778 command_runner.go:130] > WantedBy=multi-user.target
	I0223 14:23:26.023526   20778 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 14:23:26.023608   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 14:23:26.033809   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:23:26.047197   20778 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 14:23:26.047211   20778 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 14:23:26.048065   20778 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 14:23:26.126213   20778 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 14:23:26.204125   20778 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 14:23:26.204143   20778 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
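The 144-byte /etc/docker/daemon.json pushed here is what switches Docker itself to the "cgroupfs" driver; its exact contents are not shown in the log, so the fields below are assumptions about what such a file typically contains, marshaled in Go:

// Hypothetical sketch of the kind of daemon.json pushed above; the real 144-byte
// payload is not shown in the log, so these fields are assumptions.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]interface{}{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
}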
	I0223 14:23:26.218985   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:23:26.308879   20778 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 14:23:26.534777   20778 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 14:23:26.609689   20778 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 14:23:26.609769   20778 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 14:23:26.676857   20778 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 14:23:26.748140   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:23:26.824836   20778 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 14:23:26.844166   20778 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 14:23:26.844261   20778 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 14:23:26.848292   20778 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 14:23:26.848303   20778 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 14:23:26.848310   20778 command_runner.go:130] > Device: 100023h/1048611d	Inode: 206         Links: 1
	I0223 14:23:26.848318   20778 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 14:23:26.848325   20778 command_runner.go:130] > Access: 2023-02-23 22:23:26.832424968 +0000
	I0223 14:23:26.848330   20778 command_runner.go:130] > Modify: 2023-02-23 22:23:26.832424968 +0000
	I0223 14:23:26.848336   20778 command_runner.go:130] > Change: 2023-02-23 22:23:26.841424968 +0000
	I0223 14:23:26.848341   20778 command_runner.go:130] >  Birth: -
	I0223 14:23:26.848431   20778 start.go:553] Will wait 60s for crictl version
	I0223 14:23:26.848473   20778 ssh_runner.go:195] Run: which crictl
	I0223 14:23:26.852030   20778 command_runner.go:130] > /usr/bin/crictl
	I0223 14:23:26.852193   20778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 14:23:26.949296   20778 command_runner.go:130] > Version:  0.1.0
	I0223 14:23:26.949309   20778 command_runner.go:130] > RuntimeName:  docker
	I0223 14:23:26.949314   20778 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 14:23:26.949319   20778 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 14:23:26.951247   20778 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 14:23:26.951322   20778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:23:26.973821   20778 command_runner.go:130] > 23.0.1
	I0223 14:23:26.975402   20778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:23:26.998283   20778 command_runner.go:130] > 23.0.1
	I0223 14:23:27.019920   20778 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 14:23:27.062307   20778 out.go:177]   - env NO_PROXY=192.168.58.2
	I0223 14:23:27.083315   20778 cli_runner.go:164] Run: docker exec -t multinode-359000-m02 dig +short host.docker.internal
	I0223 14:23:27.195252   20778 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 14:23:27.195375   20778 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 14:23:27.199966   20778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:23:27.209948   20778 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000 for IP: 192.168.58.3
	I0223 14:23:27.209967   20778 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:23:27.210144   20778 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
	I0223 14:23:27.210194   20778 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
	I0223 14:23:27.210204   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 14:23:27.210226   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 14:23:27.210245   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 14:23:27.210265   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 14:23:27.210357   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
	W0223 14:23:27.210403   20778 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
	I0223 14:23:27.210414   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 14:23:27.210448   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
	I0223 14:23:27.210482   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
	I0223 14:23:27.210511   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
	I0223 14:23:27.210592   20778 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:23:27.210629   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem -> /usr/share/ca-certificates/15210.pem
	I0223 14:23:27.210652   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> /usr/share/ca-certificates/152102.pem
	I0223 14:23:27.210671   20778 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:23:27.210971   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 14:23:27.228280   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 14:23:27.245504   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 14:23:27.262700   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 14:23:27.279700   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
	I0223 14:23:27.296866   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
	I0223 14:23:27.314057   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 14:23:27.331575   20778 ssh_runner.go:195] Run: openssl version
	I0223 14:23:27.336711   20778 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 14:23:27.337121   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
	I0223 14:23:27.345212   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
	I0223 14:23:27.349341   20778 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:23:27.349371   20778 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:23:27.349417   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
	I0223 14:23:27.354392   20778 command_runner.go:130] > 51391683
	I0223 14:23:27.354832   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
	I0223 14:23:27.362893   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
	I0223 14:23:27.370973   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
	I0223 14:23:27.374765   20778 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:23:27.374889   20778 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:23:27.374938   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
	I0223 14:23:27.379979   20778 command_runner.go:130] > 3ec20f2e
	I0223 14:23:27.380314   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 14:23:27.388493   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 14:23:27.396503   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:23:27.400542   20778 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:23:27.400616   20778 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:23:27.400666   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:23:27.405790   20778 command_runner.go:130] > b5213941
	I0223 14:23:27.406154   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
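The openssl/ln pairs above install each CA certificate under /etc/ssl/certs as a <subject-hash>.0 symlink so OpenSSL can locate it by hash. A small Go sketch of that step, shelling out to openssl the same way (the helper name is made up):

// Hypothetical sketch: compute the OpenSSL subject hash of a CA certificate and
// install the <hash>.0 symlink under /etc/ssl/certs, as the commands above do.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace an existing link, if any
	return os.Symlink(pem, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}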
	I0223 14:23:27.414200   20778 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 14:23:27.438795   20778 command_runner.go:130] > cgroupfs
	I0223 14:23:27.440485   20778 cni.go:84] Creating CNI manager for ""
	I0223 14:23:27.440496   20778 cni.go:136] 2 nodes found, recommending kindnet
	I0223 14:23:27.440504   20778 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 14:23:27.440521   20778 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-359000 NodeName:multinode-359000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 14:23:27.440620   20778 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-359000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
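	The node-specific parts of the kubeadm config above (advertise address, node name, CRI socket, node IP) come from the options struct logged at kubeadm.go:172. A rough Go sketch of filling those fields with text/template, assuming invented field names rather than minikube's actual template:
	
	// Hypothetical sketch: render the node-specific fragment of the InitConfiguration
	// above from a small options struct. Not minikube's real template.
	package main
	
	import (
		"os"
		"text/template"
	)
	
	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`
	
	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
		NodeIP           string
	}
	
	func main() {
		o := opts{"192.168.58.3", 8443, "/var/run/cri-dockerd.sock", "multinode-359000-m02", "192.168.58.3"}
		t := template.Must(template.New("init").Parse(initCfg))
		if err := t.Execute(os.Stdout, o); err != nil {
			panic(err)
		}
	}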
	
	I0223 14:23:27.440676   20778 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-359000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
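The kubelet unit drop-in above overrides ExecStart with per-node flags (CRI socket, hostname override, node IP). A small Go sketch assembling that flag line from the same parameters, assuming a made-up helper name:

// Hypothetical sketch: assemble the kubelet ExecStart line shown above from a few
// node parameters. The flag set mirrors the unit file in the log.
package main

import (
	"fmt"
	"strings"
)

func kubeletArgs(version, node, nodeIP, criSocket string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime=remote",
		"--container-runtime-endpoint=" + criSocket,
		"--hostname-override=" + node,
		"--image-service-endpoint=" + criSocket,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletArgs("v1.26.1", "multinode-359000-m02", "192.168.58.3", "/var/run/cri-dockerd.sock"))
}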
	I0223 14:23:27.440746   20778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 14:23:27.447829   20778 command_runner.go:130] > kubeadm
	I0223 14:23:27.447838   20778 command_runner.go:130] > kubectl
	I0223 14:23:27.447842   20778 command_runner.go:130] > kubelet
	I0223 14:23:27.448412   20778 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 14:23:27.448464   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0223 14:23:27.455798   20778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0223 14:23:27.468519   20778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 14:23:27.482122   20778 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 14:23:27.486337   20778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:23:27.496415   20778 host.go:66] Checking if "multinode-359000" exists ...
	I0223 14:23:27.496589   20778 config.go:182] Loaded profile config "multinode-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:23:27.496614   20778 start.go:301] JoinCluster: &{Name:multinode-359000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-359000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:23:27.496674   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0223 14:23:27.496757   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:23:27.555298   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:23:27.710569   20778 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mjr27n.4th1hcvqu294bu63 --discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 
	I0223 14:23:27.714935   20778 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 14:23:27.714965   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mjr27n.4th1hcvqu294bu63 --discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-359000-m02"
	I0223 14:23:27.757281   20778 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 14:23:27.870687   20778 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0223 14:23:27.870710   20778 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0223 14:23:27.895584   20778 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 14:23:27.895597   20778 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 14:23:27.895602   20778 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 14:23:27.963514   20778 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0223 14:23:29.479472   20778 command_runner.go:130] > This node has joined the cluster:
	I0223 14:23:29.479491   20778 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0223 14:23:29.479499   20778 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0223 14:23:29.479507   20778 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0223 14:23:29.482891   20778 command_runner.go:130] ! W0223 22:23:27.756576    1231 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 14:23:29.482909   20778 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 14:23:29.482919   20778 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 14:23:29.482936   20778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mjr27n.4th1hcvqu294bu63 --discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-359000-m02": (1.767949449s)
	I0223 14:23:29.482953   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0223 14:23:29.613767   20778 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0223 14:23:29.613791   20778 start.go:303] JoinCluster complete in 2.11716459s
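JoinCluster above is a two-step dance: ask the control plane for a join command with `kubeadm token create --print-join-command`, then run that command on the new node with the extra flags minikube appends. A simplified Go sketch of those two steps run on one host (in the log they go through ssh_runner to different machines):

// Hypothetical sketch of the join flow above: fetch the join command, append the
// extra flags, and run it. Node name and socket come from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out)) +
		" --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-359000-m02"
	fmt.Println("running:", join)
	if b, err := exec.Command("/bin/bash", "-c", "sudo "+join).CombinedOutput(); err != nil {
		panic(fmt.Errorf("%v: %s", err, b))
	}
}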
	I0223 14:23:29.613799   20778 cni.go:84] Creating CNI manager for ""
	I0223 14:23:29.613804   20778 cni.go:136] 2 nodes found, recommending kindnet
	I0223 14:23:29.613899   20778 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 14:23:29.618002   20778 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 14:23:29.618017   20778 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 14:23:29.618029   20778 command_runner.go:130] > Device: a6h/166d	Inode: 267127      Links: 1
	I0223 14:23:29.618037   20778 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 14:23:29.618058   20778 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 14:23:29.618066   20778 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 14:23:29.618074   20778 command_runner.go:130] > Change: 2023-02-23 21:59:23.284856714 +0000
	I0223 14:23:29.618079   20778 command_runner.go:130] >  Birth: -
	I0223 14:23:29.618120   20778 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 14:23:29.618127   20778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 14:23:29.631466   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 14:23:29.819916   20778 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0223 14:23:29.822219   20778 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0223 14:23:29.824049   20778 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0223 14:23:29.832566   20778 command_runner.go:130] > daemonset.apps/kindnet configured
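The kindnet manifest is copied to /var/tmp/minikube/cni.yaml and applied with the cluster's bundled kubectl against the in-VM kubeconfig. A minimal Go sketch of that apply step, using the paths from the log:

// Hypothetical sketch: apply a CNI manifest with the cluster's bundled kubectl,
// as the command above does.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.26.1/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}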
	I0223 14:23:29.839355   20778 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:23:29.839560   20778 kapi.go:59] client config for multinode-359000: &rest.Config{Host:"https://127.0.0.1:58734", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:23:29.839848   20778 round_trippers.go:463] GET https://127.0.0.1:58734/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 14:23:29.839855   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.839861   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.839867   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.842392   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:29.842403   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.842408   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.842414   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.842420   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.842425   20778 round_trippers.go:580]     Content-Length: 291
	I0223 14:23:29.842430   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.842436   20778 round_trippers.go:580]     Audit-Id: 95e67a2e-cb37-46e9-99dd-be393e303326
	I0223 14:23:29.842442   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.842454   20778 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"08757e71-1b54-44ae-9839-af03f5e9d0c0","resourceVersion":"430","creationTimestamp":"2023-02-23T22:22:43Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 14:23:29.842497   20778 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-359000" context rescaled to 1 replicas
	I0223 14:23:29.842511   20778 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 14:23:29.864794   20778 out.go:177] * Verifying Kubernetes components...
	I0223 14:23:29.907748   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:23:29.918575   20778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:23:29.977259   20778 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:23:29.977505   20778 kapi.go:59] client config for multinode-359000: &rest.Config{Host:"https://127.0.0.1:58734", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/multinode-359000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:23:29.977725   20778 node_ready.go:35] waiting up to 6m0s for node "multinode-359000-m02" to be "Ready" ...
	I0223 14:23:29.977763   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:29.977771   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.977782   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.977788   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.979794   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:29.979811   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.979817   20778 round_trippers.go:580]     Audit-Id: 9a05ea0c-5b0e-493f-9c3c-418719e966a9
	I0223 14:23:29.979822   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.979828   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.979832   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.979838   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.979843   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.979928   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:29.980137   20778 node_ready.go:49] node "multinode-359000-m02" has status "Ready":"True"
	I0223 14:23:29.980142   20778 node_ready.go:38] duration metric: took 2.40989ms waiting for node "multinode-359000-m02" to be "Ready" ...
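The readiness checks above hit the apiserver directly over the forwarded local port (127.0.0.1:58734) using the profile's client certificate, and look for a "Ready" node condition. A rough Go sketch of that polling loop with plain net/http, assuming the paths, port, and node name from the log (minikube itself uses client-go, not raw HTTP):

// Hypothetical sketch: poll a node's Ready condition over the forwarded apiserver
// port with the profile's client certificate.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	home := "/Users/jenkins/minikube-integration/15909-14738/.minikube"
	cert, err := tls.LoadX509KeyPair(home+"/profiles/multinode-359000/client.crt",
		home+"/profiles/multinode-359000/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(home + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{cert}, RootCAs: pool}}}

	var node struct {
		Status struct {
			Conditions []struct{ Type, Status string } `json:"conditions"`
		} `json:"status"`
	}
	for i := 0; i < 60; i++ {
		resp, err := client.Get("https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02")
		if err == nil {
			_ = json.NewDecoder(resp.Body).Decode(&node)
			resp.Body.Close()
			for _, c := range node.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}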
	I0223 14:23:29.980148   20778 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 14:23:29.980192   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods
	I0223 14:23:29.980197   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.980203   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.980210   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.983658   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:29.983673   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.983679   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.983686   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.983693   20778 round_trippers.go:580]     Audit-Id: 1bcd57e4-8c7f-4ac6-9286-83805f0611b1
	I0223 14:23:29.983699   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.983706   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.983712   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.984986   20778 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"476"},"items":[{"metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"426","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0223 14:23:29.986650   20778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:29.986693   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-4hj2n
	I0223 14:23:29.986698   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.986704   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.986711   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.989233   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:29.989245   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.989251   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.989258   20778 round_trippers.go:580]     Audit-Id: 96ac6135-ac09-4aac-8975-079fb2277c99
	I0223 14:23:29.989266   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.989271   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.989276   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.989283   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.989353   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-4hj2n","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"034c3c0c-5eec-4b91-9daf-1317dc6af725","resourceVersion":"426","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"fab0bb6b-f83a-48be-a0d3-39196956ce61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fab0bb6b-f83a-48be-a0d3-39196956ce61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 14:23:29.989616   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:29.989623   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.989628   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.989634   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.991512   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:29.991521   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.991530   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.991535   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.991540   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.991544   20778 round_trippers.go:580]     Audit-Id: a9cfa654-b2fc-4223-9d0e-b2d55126cfd9
	I0223 14:23:29.991549   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.991554   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.991758   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0223 14:23:29.991950   20778 pod_ready.go:92] pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:29.991956   20778 pod_ready.go:81] duration metric: took 5.29629ms waiting for pod "coredns-787d4945fb-4hj2n" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:29.991962   20778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:29.991992   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/etcd-multinode-359000
	I0223 14:23:29.991998   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.992005   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.992013   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.994032   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:29.994041   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.994047   20778 round_trippers.go:580]     Audit-Id: 6acfb0eb-70ae-4e2f-b912-c01a0f079d36
	I0223 14:23:29.994054   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.994061   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.994066   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.994072   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.994076   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.994125   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-359000","namespace":"kube-system","uid":"398e38cc-24ea-4f91-8b62-51681eb997b4","resourceVersion":"295","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"93ed633257d1dccd5f056f259fe5ad92","kubernetes.io/config.mirror":"93ed633257d1dccd5f056f259fe5ad92","kubernetes.io/config.seen":"2023-02-23T22:22:43.384430470Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 14:23:29.994334   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:29.994340   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.994346   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.994351   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.996547   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:29.996555   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.996561   20778 round_trippers.go:580]     Audit-Id: 41cbc821-e19c-4e3b-a3b2-72679d7d825d
	I0223 14:23:29.996566   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.996571   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.996576   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.996581   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.996586   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.996645   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0223 14:23:29.996835   20778 pod_ready.go:92] pod "etcd-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:29.996841   20778 pod_ready.go:81] duration metric: took 4.873738ms waiting for pod "etcd-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:29.996849   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:29.996883   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-359000
	I0223 14:23:29.996888   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.996895   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:29.996901   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.999183   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:29.999192   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:29.999198   20778 round_trippers.go:580]     Audit-Id: 1901105c-4c13-49d4-b7ae-80faed6b3c19
	I0223 14:23:29.999207   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:29.999213   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:29.999217   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:29.999222   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:29.999227   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:29 GMT
	I0223 14:23:29.999298   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-359000","namespace":"kube-system","uid":"39b152d9-2735-457b-a3a1-5e7aca7dc8f3","resourceVersion":"264","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"cfb3605b4e0ab2e0442f07f281676240","kubernetes.io/config.mirror":"cfb3605b4e0ab2e0442f07f281676240","kubernetes.io/config.seen":"2023-02-23T22:22:43.384450086Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 14:23:29.999552   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:29.999559   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:29.999567   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:29.999576   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.001694   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:30.001705   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.001711   20778 round_trippers.go:580]     Audit-Id: 09cec0df-fd0d-4296-9053-360d90ff3633
	I0223 14:23:30.001715   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.001720   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.001730   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.001738   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.001745   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.002609   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0223 14:23:30.003123   20778 pod_ready.go:92] pod "kube-apiserver-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:30.003134   20778 pod_ready.go:81] duration metric: took 6.27815ms waiting for pod "kube-apiserver-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:30.003143   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:30.003355   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-359000
	I0223 14:23:30.003373   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:30.003381   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:30.003412   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.006235   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:30.006246   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.006252   20778 round_trippers.go:580]     Audit-Id: 4842ebf2-e87b-4db3-911e-87d128a8857c
	I0223 14:23:30.006257   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.006262   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.006268   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.006273   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.006278   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.006354   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-359000","namespace":"kube-system","uid":"361170a2-c3b3-4be5-95ca-334b3b892a82","resourceVersion":"267","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2d2ed3414aeb862284d35d22f8aea7e3","kubernetes.io/config.mirror":"2d2ed3414aeb862284d35d22f8aea7e3","kubernetes.io/config.seen":"2023-02-23T22:22:43.384451227Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 14:23:30.006633   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:30.006639   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:30.006645   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:30.006650   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.008622   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:30.008634   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.008639   20778 round_trippers.go:580]     Audit-Id: 1849d8be-57f6-4622-9b96-6136a11c0540
	I0223 14:23:30.008645   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.008650   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.008655   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.008660   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.008665   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.008754   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0223 14:23:30.008939   20778 pod_ready.go:92] pod "kube-controller-manager-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:30.008945   20778 pod_ready.go:81] duration metric: took 5.79652ms waiting for pod "kube-controller-manager-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:30.008951   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lkkx4" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:30.178098   20778 request.go:622] Waited for 169.095013ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-lkkx4
	I0223 14:23:30.178153   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-lkkx4
	I0223 14:23:30.178163   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:30.178175   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.178190   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:30.181894   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:30.181908   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.181914   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.181919   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.181927   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.181933   20778 round_trippers.go:580]     Audit-Id: 0a729be5-01b9-4203-bd78-6647b3bf1e46
	I0223 14:23:30.181939   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.181943   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.182014   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lkkx4","generateName":"kube-proxy-","namespace":"kube-system","uid":"42230635-8bb5-4f57-b543-5ddbeada143a","resourceVersion":"392","creationTimestamp":"2023-02-23T22:22:55Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 14:23:30.377978   20778 request.go:622] Waited for 195.675934ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:30.378032   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:30.378122   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:30.378136   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:30.378154   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.381216   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:30.381227   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.381233   20778 round_trippers.go:580]     Audit-Id: 891ee167-80f5-4a4b-a2f6-685bd2308e0c
	I0223 14:23:30.381238   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.381245   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.381250   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.381255   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.381260   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.381514   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0223 14:23:30.381713   20778 pod_ready.go:92] pod "kube-proxy-lkkx4" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:30.381721   20778 pod_ready.go:81] duration metric: took 372.763127ms waiting for pod "kube-proxy-lkkx4" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:30.381727   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-slmv4" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:30.577912   20778 request.go:622] Waited for 196.14555ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:30.577950   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:30.577957   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:30.577966   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:30.577996   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.580758   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:30.580778   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.580786   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.580796   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.580805   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.580812   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.580822   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.580827   20778 round_trippers.go:580]     Audit-Id: ae135953-cd24-4539-9c1d-cbfcc47bba10
	I0223 14:23:30.580894   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"465","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0223 14:23:30.778151   20778 request.go:622] Waited for 196.986376ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:30.778264   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:30.778274   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:30.778286   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:30.778296   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:30.781546   20778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 14:23:30.781559   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:30.781568   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:30 GMT
	I0223 14:23:30.781579   20778 round_trippers.go:580]     Audit-Id: 2df2455f-c8d7-461b-b5d9-912853d06bb3
	I0223 14:23:30.781587   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:30.781592   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:30.781605   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:30.781614   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:30.781824   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:31.283411   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:31.283438   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:31.283450   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:31.283460   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:31.287837   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:31.287855   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:31.287864   20778 round_trippers.go:580]     Audit-Id: 050f56b6-0ef9-4020-ac2b-d1bf327f0a51
	I0223 14:23:31.287870   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:31.287877   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:31.287885   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:31.287891   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:31.287898   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:31 GMT
	I0223 14:23:31.287995   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"480","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 14:23:31.288247   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:31.288254   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:31.288259   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:31.288265   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:31.290435   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:31.290445   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:31.290453   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:31.290459   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:31.290465   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:31.290474   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:31.290480   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:31 GMT
	I0223 14:23:31.290485   20778 round_trippers.go:580]     Audit-Id: 29148376-0e84-4a3b-aee6-d5281f652ec5
	I0223 14:23:31.290535   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:31.783311   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:31.783330   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:31.783339   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:31.783350   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:31.786138   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:31.786152   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:31.786161   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:31.786167   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:31.786172   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:31.786179   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:31 GMT
	I0223 14:23:31.786188   20778 round_trippers.go:580]     Audit-Id: 667f8784-285d-488e-b058-5399790c6f9a
	I0223 14:23:31.786199   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:31.786374   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"480","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 14:23:31.786621   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:31.786629   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:31.786637   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:31.786642   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:31.788953   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:31.788963   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:31.788969   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:31.788976   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:31.788982   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:31 GMT
	I0223 14:23:31.788987   20778 round_trippers.go:580]     Audit-Id: db2c3769-f20a-4065-9e74-1fccc8d56bd4
	I0223 14:23:31.788992   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:31.788997   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:31.789048   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:32.283396   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:32.283421   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:32.283434   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:32.283444   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:32.287690   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:32.287702   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:32.287707   20778 round_trippers.go:580]     Audit-Id: a6d330ae-51bb-4416-b24b-7fbb9169726a
	I0223 14:23:32.287712   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:32.287717   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:32.287722   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:32.287727   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:32.287735   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:32 GMT
	I0223 14:23:32.287786   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"480","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 14:23:32.288046   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:32.288052   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:32.288058   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:32.288063   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:32.290264   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:32.290274   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:32.290279   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:32.290284   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:32 GMT
	I0223 14:23:32.290291   20778 round_trippers.go:580]     Audit-Id: 5f897324-2dcb-42cd-bbb8-902282ee92d1
	I0223 14:23:32.290296   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:32.290301   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:32.290306   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:32.290351   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:32.783398   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:32.783425   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:32.783437   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:32.783447   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:32.787674   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:32.787691   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:32.787699   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:32.787706   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:32.787713   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:32 GMT
	I0223 14:23:32.787720   20778 round_trippers.go:580]     Audit-Id: 8af6f9e1-4b45-4876-ad20-768bb65c7a12
	I0223 14:23:32.787727   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:32.787734   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:32.787829   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"480","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 14:23:32.788165   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:32.788172   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:32.788179   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:32.788184   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:32.789968   20778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 14:23:32.789982   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:32.789996   20778 round_trippers.go:580]     Audit-Id: acee255e-91aa-4109-821c-bc1564c5b4ff
	I0223 14:23:32.790010   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:32.790022   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:32.790036   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:32.790045   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:32.790057   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:32 GMT
	I0223 14:23:32.790342   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:32.790499   20778 pod_ready.go:102] pod "kube-proxy-slmv4" in "kube-system" namespace has status "Ready":"False"
	I0223 14:23:33.283788   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:33.283804   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.283813   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.283818   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.286717   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:33.286728   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.286734   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.286739   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.286744   20778 round_trippers.go:580]     Audit-Id: f67bd822-d284-4466-9733-dc5838b06f2a
	I0223 14:23:33.286749   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.286754   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.286758   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.287093   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"480","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 14:23:33.287381   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:33.287389   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.287395   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.287401   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.289861   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:33.289871   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.289877   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.289882   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.289888   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.289895   20778 round_trippers.go:580]     Audit-Id: dd169bb2-05fe-4d44-909b-8eee8cbe7ad0
	I0223 14:23:33.289901   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.289906   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.289956   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:33.784030   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-proxy-slmv4
	I0223 14:23:33.784058   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.784072   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.784082   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.788522   20778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 14:23:33.788542   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.788550   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.788557   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.788564   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.788580   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.788587   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.788594   20778 round_trippers.go:580]     Audit-Id: 374ee5eb-d529-4485-8877-2f78793b85f7
	I0223 14:23:33.788688   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-slmv4","generateName":"kube-proxy-","namespace":"kube-system","uid":"b00d8f5e-5c20-4b95-85c7-bc5059faeb93","resourceVersion":"488","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"7a9b877b-c858-4ec2-96ed-bcbe957440c7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7a9b877b-c858-4ec2-96ed-bcbe957440c7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0223 14:23:33.789009   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000-m02
	I0223 14:23:33.789015   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.789020   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.789026   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.791461   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:33.791472   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.791477   20778 round_trippers.go:580]     Audit-Id: a355332e-c436-4fc9-a31f-5a0115c969a0
	I0223 14:23:33.791483   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.791487   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.791494   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.791499   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.791504   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.791543   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000-m02","uid":"a0da1c81-2489-44a2-a749-43a0fa68a89f","resourceVersion":"476","creationTimestamp":"2023-02-23T22:23:28Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:23:28Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 4012 chars]
	I0223 14:23:33.791695   20778 pod_ready.go:92] pod "kube-proxy-slmv4" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:33.791705   20778 pod_ready.go:81] duration metric: took 3.409954941s waiting for pod "kube-proxy-slmv4" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:33.791711   20778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:33.791736   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-359000
	I0223 14:23:33.791743   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.791749   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.791754   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.793772   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:33.793785   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.793797   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.793805   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.793812   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.793819   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.793824   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.793835   20778 round_trippers.go:580]     Audit-Id: 31aa3316-ffe0-456d-b250-c605e11faf04
	I0223 14:23:33.793997   20778 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-359000","namespace":"kube-system","uid":"525e88fd-a6fc-470a-a99a-6ceede2058e5","resourceVersion":"291","creationTimestamp":"2023-02-23T22:22:43Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"68ba80c02e331ad063843d01029c90d4","kubernetes.io/config.mirror":"68ba80c02e331ad063843d01029c90d4","kubernetes.io/config.seen":"2023-02-23T22:22:43.384451945Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T22:22:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 14:23:33.794222   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes/multinode-359000
	I0223 14:23:33.794230   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.794237   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.794245   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.796486   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:33.796497   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.796502   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.796507   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.796513   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.796522   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.796528   20778 round_trippers.go:580]     Audit-Id: ec0d34c5-2056-4dc8-ad77-faf56577951f
	I0223 14:23:33.796533   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.796586   20778 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T22:22:41Z","fieldsType":"FieldsV1","fi [truncated 5114 chars]
	I0223 14:23:33.796766   20778 pod_ready.go:92] pod "kube-scheduler-multinode-359000" in "kube-system" namespace has status "Ready":"True"
	I0223 14:23:33.796773   20778 pod_ready.go:81] duration metric: took 5.057538ms waiting for pod "kube-scheduler-multinode-359000" in "kube-system" namespace to be "Ready" ...
	I0223 14:23:33.796779   20778 pod_ready.go:38] duration metric: took 3.816603537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 14:23:33.796789   20778 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 14:23:33.796844   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:23:33.806683   20778 system_svc.go:56] duration metric: took 9.890246ms WaitForService to wait for kubelet.
	I0223 14:23:33.806696   20778 kubeadm.go:578] duration metric: took 3.964145372s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 14:23:33.806711   20778 node_conditions.go:102] verifying NodePressure condition ...
	I0223 14:23:33.806752   20778 round_trippers.go:463] GET https://127.0.0.1:58734/api/v1/nodes
	I0223 14:23:33.806756   20778 round_trippers.go:469] Request Headers:
	I0223 14:23:33.806762   20778 round_trippers.go:473]     Accept: application/json, */*
	I0223 14:23:33.806767   20778 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 14:23:33.809439   20778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 14:23:33.809452   20778 round_trippers.go:577] Response Headers:
	I0223 14:23:33.809457   20778 round_trippers.go:580]     Date: Thu, 23 Feb 2023 22:23:33 GMT
	I0223 14:23:33.809462   20778 round_trippers.go:580]     Audit-Id: dacc17f1-d27c-4eea-a86b-ace3dec29d17
	I0223 14:23:33.809468   20778 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 14:23:33.809476   20778 round_trippers.go:580]     Content-Type: application/json
	I0223 14:23:33.809482   20778 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03db4027-17ee-46c9-a8ac-dffed1412527
	I0223 14:23:33.809487   20778 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e5ad60c6-492a-4c43-bb95-801c2767bacf
	I0223 14:23:33.809583   20778 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"490"},"items":[{"metadata":{"name":"multinode-359000","uid":"b62f2e5b-5f00-4884-ab14-73c9db9fff82","resourceVersion":"433","creationTimestamp":"2023-02-23T22:22:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-359000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0","minikube.k8s.io/name":"multinode-359000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T14_22_44_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10171 chars]
	I0223 14:23:33.809893   20778 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0223 14:23:33.809902   20778 node_conditions.go:123] node cpu capacity is 6
	I0223 14:23:33.809917   20778 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0223 14:23:33.809921   20778 node_conditions.go:123] node cpu capacity is 6
	I0223 14:23:33.809925   20778 node_conditions.go:105] duration metric: took 3.210113ms to run NodePressure ...
	I0223 14:23:33.809933   20778 start.go:228] waiting for startup goroutines ...
	I0223 14:23:33.809950   20778 start.go:242] writing updated cluster config ...
	I0223 14:23:33.837902   20778 ssh_runner.go:195] Run: rm -f paused
	I0223 14:23:33.876362   20778 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0223 14:23:33.897771   20778 out.go:177] * Done! kubectl is now configured to use "multinode-359000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 22:22:26 UTC, end at Thu 2023-02-23 22:23:45 UTC. --
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028508250Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028532946Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028545167Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028595353Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028610510Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028628532Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028672345Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028744747Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.028778196Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.029136942Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.029207530Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.029630466Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.037382708Z" level=info msg="Loading containers: start."
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.114607166Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.147532455Z" level=info msg="Loading containers: done."
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.155507972Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.155568140Z" level=info msg="Daemon has completed initialization"
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.176474146Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 22:22:30 multinode-359000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.181922416Z" level=info msg="API listen on [::]:2376"
	Feb 23 22:22:30 multinode-359000 dockerd[832]: time="2023-02-23T22:22:30.185594222Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 23 22:23:10 multinode-359000 dockerd[832]: time="2023-02-23T22:23:10.579553651Z" level=info msg="ignoring event" container=3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:23:10 multinode-359000 dockerd[832]: time="2023-02-23T22:23:10.689696488Z" level=info msg="ignoring event" container=5a80257db7ab63e30b492ef9edac46fd01ddfb0cd659ea3cf2edcbaf3aa5dc66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:23:11 multinode-359000 dockerd[832]: time="2023-02-23T22:23:11.415971732Z" level=info msg="ignoring event" container=4c53a971712a250235eb0b9c9e7bc48e5fb9546c37a799b3c8dff6dac6086269 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:23:11 multinode-359000 dockerd[832]: time="2023-02-23T22:23:11.473443628Z" level=info msg="ignoring event" container=adf3b8437f58143117fd90eae76df14cd9c62c0581498bbd9a99420c1b6210cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	1ccbed670c9b8       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   8 seconds ago        Running             busybox                   0                   dc3a99606f354
	0599d5d10e4b8       5185b96f0becf                                                                                         34 seconds ago       Running             coredns                   1                   58498fd30ffac
	1ee4943e67d73       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              46 seconds ago       Running             kindnet-cni               0                   a62fff4127d3a
	9e2ec0b97da56       6e38f40d628db                                                                                         48 seconds ago       Running             storage-provisioner       0                   adfcc9ef8d54d
	4c53a971712a2       5185b96f0becf                                                                                         48 seconds ago       Exited              coredns                   0                   adf3b8437f581
	c5e089ae7a37b       46a6bb3c77ce0                                                                                         49 seconds ago       Running             kube-proxy                0                   fb25162c4acdd
	369b8cd310185       deb04688c4a35                                                                                         About a minute ago   Running             kube-apiserver            0                   7936273c5c142
	dcd9a92734499       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   271fbaa821695
	e3f83b3f55f93       655493523f607                                                                                         About a minute ago   Running             kube-scheduler            0                   0f5c9fa66b403
	a0907a2dfdc08       e9c08e11b07f6                                                                                         About a minute ago   Running             kube-controller-manager   0                   d291a87615ae3
	
	* 
	* ==> coredns [0599d5d10e4b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:35532 - 761 "HINFO IN 94145845304353067.6871346282503012771. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.01453778s
	[INFO] 10.244.0.3:54613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017723s
	[INFO] 10.244.0.3:47317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046373109s
	[INFO] 10.244.0.3:37974 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003413893s
	[INFO] 10.244.0.3:59396 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011871266s
	[INFO] 10.244.0.3:56055 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135247s
	[INFO] 10.244.0.3:53172 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005329762s
	[INFO] 10.244.0.3:36912 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170424s
	[INFO] 10.244.0.3:58427 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152658s
	[INFO] 10.244.0.3:36494 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004240922s
	[INFO] 10.244.0.3:49408 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155638s
	[INFO] 10.244.0.3:58301 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129775s
	[INFO] 10.244.0.3:47060 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113379s
	[INFO] 10.244.0.3:34216 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133927s
	[INFO] 10.244.0.3:50405 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104549s
	[INFO] 10.244.0.3:39896 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102874s
	[INFO] 10.244.0.3:52326 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093339s
	[INFO] 10.244.0.3:49465 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125401s
	[INFO] 10.244.0.3:41493 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000136788s
	[INFO] 10.244.0.3:50529 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109021s
	[INFO] 10.244.0.3:60160 - 5 "PTR IN 2.65.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095334s
	
	* 
	* ==> coredns [4c53a971712a] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 5435270432736386928.6717425237758278781. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 5435270432736386928.6717425237758278781. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-359000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-359000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0
	                    minikube.k8s.io/name=multinode-359000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T14_22_44_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:22:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-359000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:23:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:23:45 +0000   Thu, 23 Feb 2023 22:22:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:23:45 +0000   Thu, 23 Feb 2023 22:22:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:23:45 +0000   Thu, 23 Feb 2023 22:22:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:23:45 +0000   Thu, 23 Feb 2023 22:22:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-359000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    5486766d-d32d-40b6-9600-b780b0c83991
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-ghfsb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-787d4945fb-4hj2n                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     51s
	  kube-system                 etcd-multinode-359000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-8hs9x                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      51s
	  kube-system                 kube-apiserver-multinode-359000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-multinode-359000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-lkkx4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-scheduler-multinode-359000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 49s   kube-proxy       
	  Normal  Starting                 63s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  63s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                63s   kubelet          Node multinode-359000 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  63s   kubelet          Node multinode-359000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s   kubelet          Node multinode-359000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s   kubelet          Node multinode-359000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s   node-controller  Node multinode-359000 event: Registered Node multinode-359000 in Controller
	
	
	Name:               multinode-359000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-359000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:23:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-359000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:23:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:23:29 +0000   Thu, 23 Feb 2023 22:23:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:23:29 +0000   Thu, 23 Feb 2023 22:23:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:23:29 +0000   Thu, 23 Feb 2023 22:23:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:23:29 +0000   Thu, 23 Feb 2023 22:23:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-359000-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    5486766d-d32d-40b6-9600-b780b0c83991
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-9zw45    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kindnet-w7skb               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18s
	  kube-system                 kube-proxy-slmv4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  Starting                 18s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x2 over 18s)  kubelet          Node multinode-359000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x2 over 18s)  kubelet          Node multinode-359000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x2 over 18s)  kubelet          Node multinode-359000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                17s                kubelet          Node multinode-359000-m02 status is now: NodeReady
	  Normal  RegisteredNode           16s                node-controller  Node multinode-359000-m02 event: Registered Node multinode-359000-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000066] FS-Cache: O-key=[8] '7136580500000000'
	[  +0.000050] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000051] FS-Cache: N-cookie d=00000000f0b26649{9p.inode} n=0000000032a0fa48
	[  +0.000163] FS-Cache: N-key=[8] '7136580500000000'
	[  +0.002658] FS-Cache: Duplicate cookie detected
	[  +0.000052] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000050] FS-Cache: O-cookie d=00000000f0b26649{9p.inode} n=000000008e7781d6
	[  +0.000070] FS-Cache: O-key=[8] '7136580500000000'
	[  +0.000028] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000113] FS-Cache: N-cookie d=00000000f0b26649{9p.inode} n=000000004fece264
	[  +0.000061] FS-Cache: N-key=[8] '7136580500000000'
	[Feb23 22:08] FS-Cache: Duplicate cookie detected
	[  +0.000034] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000058] FS-Cache: O-cookie d=00000000f0b26649{9p.inode} n=000000006ea4f74a
	[  +0.000063] FS-Cache: O-key=[8] '7036580500000000'
	[  +0.000034] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000041] FS-Cache: N-cookie d=00000000f0b26649{9p.inode} n=000000000a668217
	[  +0.000066] FS-Cache: N-key=[8] '7036580500000000'
	[  +0.413052] FS-Cache: Duplicate cookie detected
	[  +0.000113] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000056] FS-Cache: O-cookie d=00000000f0b26649{9p.inode} n=00000000634601b2
	[  +0.000097] FS-Cache: O-key=[8] '7736580500000000'
	[  +0.000045] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000055] FS-Cache: N-cookie d=00000000f0b26649{9p.inode} n=000000004fece264
	[  +0.000089] FS-Cache: N-key=[8] '7736580500000000'
	
	* 
	* ==> etcd [dcd9a9273449] <==
	* {"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-23T22:22:38.986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T22:22:38.987Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-359000 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T22:22:38.987Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:22:38.987Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:22:38.987Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:22:38.988Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T22:22:38.988Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T22:22:38.988Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:22:38.988Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:22:38.988Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:22:38.988Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T22:22:38.989Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-02-23T22:23:20.368Z","caller":"traceutil/trace.go:171","msg":"trace[1549276223] linearizableReadLoop","detail":"{readStateIndex:453; appliedIndex:452; }","duration":"284.250616ms","start":"2023-02-23T22:23:20.084Z","end":"2023-02-23T22:23:20.368Z","steps":["trace[1549276223] 'read index received'  (duration: 284.082521ms)","trace[1549276223] 'applied index is now lower than readState.Index'  (duration: 167.663µs)"],"step_count":2}
	{"level":"info","ts":"2023-02-23T22:23:20.368Z","caller":"traceutil/trace.go:171","msg":"trace[510834250] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"452.213068ms","start":"2023-02-23T22:23:19.916Z","end":"2023-02-23T22:23:20.368Z","steps":["trace[510834250] 'process raft request'  (duration: 451.938904ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-23T22:23:20.368Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"284.522911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-23T22:23:20.368Z","caller":"traceutil/trace.go:171","msg":"trace[2104721247] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:436; }","duration":"284.661043ms","start":"2023-02-23T22:23:20.084Z","end":"2023-02-23T22:23:20.368Z","steps":["trace[2104721247] 'agreement among raft nodes before linearized reading'  (duration: 284.507096ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-23T22:23:20.368Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"278.009091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-02-23T22:23:20.368Z","caller":"traceutil/trace.go:171","msg":"trace[885391539] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:436; }","duration":"278.341795ms","start":"2023-02-23T22:23:20.090Z","end":"2023-02-23T22:23:20.368Z","steps":["trace[885391539] 'agreement among raft nodes before linearized reading'  (duration: 277.992004ms)"],"step_count":1}
	{"level":"warn","ts":"2023-02-23T22:23:20.368Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-02-23T22:23:19.916Z","time spent":"452.248921ms","remote":"127.0.0.1:43488","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:435 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	* 
	* ==> kernel <==
	*  22:23:46 up  1:52,  0 users,  load average: 1.47, 1.28, 0.76
	Linux multinode-359000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [1ee4943e67d7] <==
	* I0223 22:22:59.665719       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0223 22:22:59.665837       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0223 22:22:59.665971       1 main.go:116] setting mtu 1500 for CNI 
	I0223 22:22:59.665981       1 main.go:146] kindnetd IP family: "ipv4"
	I0223 22:22:59.665997       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0223 22:23:00.364511       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:23:00.364593       1 main.go:227] handling current node
	I0223 22:23:10.379072       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:23:10.379113       1 main.go:227] handling current node
	I0223 22:23:20.387293       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:23:20.387320       1 main.go:227] handling current node
	I0223 22:23:30.390653       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:23:30.390692       1 main.go:227] handling current node
	I0223 22:23:30.390699       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 22:23:30.390704       1 main.go:250] Node multinode-359000-m02 has CIDR [10.244.1.0/24] 
	I0223 22:23:30.390802       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0223 22:23:40.396312       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 22:23:40.396355       1 main.go:227] handling current node
	I0223 22:23:40.396364       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 22:23:40.396368       1 main.go:250] Node multinode-359000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [369b8cd31018] <==
	* I0223 22:22:40.203142       1 shared_informer.go:280] Caches are synced for configmaps
	I0223 22:22:40.203225       1 cache.go:39] Caches are synced for autoregister controller
	I0223 22:22:40.203391       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0223 22:22:40.203528       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0223 22:22:40.203588       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0223 22:22:40.204763       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0223 22:22:40.204778       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0223 22:22:40.204790       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 22:22:40.217027       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 22:22:40.926200       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 22:22:41.108587       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0223 22:22:41.111058       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0223 22:22:41.111093       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 22:22:41.586625       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 22:22:41.615595       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0223 22:22:41.730234       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0223 22:22:41.735113       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0223 22:22:41.735776       1 controller.go:615] quota admission added evaluator for: endpoints
	I0223 22:22:41.738959       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0223 22:22:42.131013       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0223 22:22:43.274650       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0223 22:22:43.282271       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0223 22:22:43.289275       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0223 22:22:55.685737       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0223 22:22:55.735535       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [a0907a2dfdc0] <==
	* I0223 22:22:55.090508       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 22:22:55.124886       1 shared_informer.go:280] Caches are synced for cronjob
	I0223 22:22:55.130351       1 shared_informer.go:280] Caches are synced for TTL after finished
	I0223 22:22:55.133629       1 shared_informer.go:280] Caches are synced for job
	I0223 22:22:55.187633       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 22:22:55.571851       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 22:22:55.583552       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 22:22:55.583587       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0223 22:22:55.692009       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8hs9x"
	I0223 22:22:55.693322       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lkkx4"
	I0223 22:22:55.738145       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0223 22:22:55.977601       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0223 22:22:55.988883       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-4rfn2"
	I0223 22:22:55.997144       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-4hj2n"
	I0223 22:22:56.084317       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-4rfn2"
	W0223 22:23:28.781972       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-359000-m02" does not exist
	I0223 22:23:28.787069       1 range_allocator.go:372] Set node multinode-359000-m02 PodCIDR to [10.244.1.0/24]
	I0223 22:23:28.788533       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-slmv4"
	I0223 22:23:28.788847       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w7skb"
	W0223 22:23:29.428543       1 topologycache.go:232] Can't get CPU or zone information for multinode-359000-m02 node
	W0223 22:23:30.068725       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-359000-m02. Assuming now as a timestamp.
	I0223 22:23:30.068981       1 event.go:294] "Event occurred" object="multinode-359000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-359000-m02 event: Registered Node multinode-359000-m02 in Controller"
	I0223 22:23:34.864832       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0223 22:23:34.910137       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-9zw45"
	I0223 22:23:34.913892       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-ghfsb"
	
	* 
	* ==> kube-proxy [c5e089ae7a37] <==
	* I0223 22:22:56.598618       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0223 22:22:56.598704       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0223 22:22:56.598718       1 server_others.go:535] "Using iptables proxy"
	I0223 22:22:56.691218       1 server_others.go:176] "Using iptables Proxier"
	I0223 22:22:56.691317       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0223 22:22:56.691326       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0223 22:22:56.691341       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0223 22:22:56.691362       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0223 22:22:56.692039       1 server.go:655] "Version info" version="v1.26.1"
	I0223 22:22:56.692097       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 22:22:56.692753       1 config.go:317] "Starting service config controller"
	I0223 22:22:56.692788       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0223 22:22:56.692810       1 config.go:226] "Starting endpoint slice config controller"
	I0223 22:22:56.692813       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0223 22:22:56.693303       1 config.go:444] "Starting node config controller"
	I0223 22:22:56.693311       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0223 22:22:56.792946       1 shared_informer.go:280] Caches are synced for service config
	I0223 22:22:56.793027       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0223 22:22:56.794356       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e3f83b3f55f9] <==
	* W0223 22:22:40.168265       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0223 22:22:40.168600       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0223 22:22:40.168616       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0223 22:22:40.168626       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0223 22:22:40.168723       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0223 22:22:40.168751       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0223 22:22:40.168774       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0223 22:22:40.168800       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0223 22:22:40.168966       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0223 22:22:40.169023       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0223 22:22:40.169218       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0223 22:22:40.169260       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0223 22:22:40.169918       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0223 22:22:40.169960       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0223 22:22:40.169976       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0223 22:22:40.169989       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0223 22:22:41.078835       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0223 22:22:41.078897       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0223 22:22:41.274491       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0223 22:22:41.274537       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0223 22:22:41.388336       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0223 22:22:41.388380       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0223 22:22:41.579345       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0223 22:22:41.579455       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0223 22:22:43.928827       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 22:22:26 UTC, end at Thu 2023-02-23 22:23:47 UTC. --
	Feb 23 22:22:57 multinode-359000 kubelet[2137]: I0223 22:22:57.098200    2137 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8f927b9f-d9b7-4b15-9905-e816d50c40bc-tmp\") pod \"storage-provisioner\" (UID: \"8f927b9f-d9b7-4b15-9905-e816d50c40bc\") " pod="kube-system/storage-provisioner"
	Feb 23 22:22:58 multinode-359000 kubelet[2137]: I0223 22:22:58.228086    2137 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-4hj2n" podStartSLOduration=3.228059507 pod.CreationTimestamp="2023-02-23 22:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:22:57.827271839 +0000 UTC m=+14.569256017" watchObservedRunningTime="2023-02-23 22:22:58.228059507 +0000 UTC m=+14.970043679"
	Feb 23 22:22:58 multinode-359000 kubelet[2137]: I0223 22:22:58.672194    2137 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lkkx4" podStartSLOduration=3.672166195 pod.CreationTimestamp="2023-02-23 22:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:22:58.228028265 +0000 UTC m=+14.970012439" watchObservedRunningTime="2023-02-23 22:22:58.672166195 +0000 UTC m=+15.414150373"
	Feb 23 22:22:59 multinode-359000 kubelet[2137]: I0223 22:22:59.069543    2137 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-4rfn2" podStartSLOduration=4.069498635 pod.CreationTimestamp="2023-02-23 22:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:22:58.672393657 +0000 UTC m=+15.414377836" watchObservedRunningTime="2023-02-23 22:22:59.069498635 +0000 UTC m=+15.811482808"
	Feb 23 22:22:59 multinode-359000 kubelet[2137]: I0223 22:22:59.691160    2137 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.691134144 pod.CreationTimestamp="2023-02-23 22:22:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:22:59.06975258 +0000 UTC m=+15.811736760" watchObservedRunningTime="2023-02-23 22:22:59.691134144 +0000 UTC m=+16.433118317"
	Feb 23 22:23:04 multinode-359000 kubelet[2137]: I0223 22:23:04.801240    2137 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 23 22:23:04 multinode-359000 kubelet[2137]: I0223 22:23:04.801694    2137 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.792165    2137 scope.go:115] "RemoveContainer" containerID="3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8"
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.801977    2137 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-8hs9x" podStartSLOduration=-9.223372021052822e+09 pod.CreationTimestamp="2023-02-23 22:22:55 +0000 UTC" firstStartedPulling="2023-02-23 22:22:56.393870569 +0000 UTC m=+13.135854737" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 22:22:59.691259533 +0000 UTC m=+16.433243702" watchObservedRunningTime="2023-02-23 22:23:10.801953048 +0000 UTC m=+27.543937221"
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.802495    2137 scope.go:115] "RemoveContainer" containerID="3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8"
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: E0223 22:23:10.803260    2137 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8" containerID="3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8"
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.803310    2137 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8} err="failed to get container status \"3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8\": rpc error: code = Unknown desc = Error: No such container: 3bd4acc892cbd5dfaa76bdaef5c1d3448642af9bed83a370aeb6b9a71b0badb8"
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.893197    2137 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q7ws\" (UniqueName: \"kubernetes.io/projected/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54-kube-api-access-6q7ws\") pod \"1cd33c9e-c0c4-48ac-88d1-a643a0eebc54\" (UID: \"1cd33c9e-c0c4-48ac-88d1-a643a0eebc54\") "
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.893258    2137 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54-config-volume\") pod \"1cd33c9e-c0c4-48ac-88d1-a643a0eebc54\" (UID: \"1cd33c9e-c0c4-48ac-88d1-a643a0eebc54\") "
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: W0223 22:23:10.893451    2137 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.893610    2137 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54-config-volume" (OuterVolumeSpecName: "config-volume") pod "1cd33c9e-c0c4-48ac-88d1-a643a0eebc54" (UID: "1cd33c9e-c0c4-48ac-88d1-a643a0eebc54"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.895084    2137 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54-kube-api-access-6q7ws" (OuterVolumeSpecName: "kube-api-access-6q7ws") pod "1cd33c9e-c0c4-48ac-88d1-a643a0eebc54" (UID: "1cd33c9e-c0c4-48ac-88d1-a643a0eebc54"). InnerVolumeSpecName "kube-api-access-6q7ws". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.993557    2137 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-6q7ws\" (UniqueName: \"kubernetes.io/projected/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54-kube-api-access-6q7ws\") on node \"multinode-359000\" DevicePath \"\""
	Feb 23 22:23:10 multinode-359000 kubelet[2137]: I0223 22:23:10.993628    2137 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54-config-volume\") on node \"multinode-359000\" DevicePath \"\""
	Feb 23 22:23:11 multinode-359000 kubelet[2137]: I0223 22:23:11.482961    2137 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1cd33c9e-c0c4-48ac-88d1-a643a0eebc54 path="/var/lib/kubelet/pods/1cd33c9e-c0c4-48ac-88d1-a643a0eebc54/volumes"
	Feb 23 22:23:11 multinode-359000 kubelet[2137]: I0223 22:23:11.809813    2137 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf3b8437f58143117fd90eae76df14cd9c62c0581498bbd9a99420c1b6210cc"
	Feb 23 22:23:34 multinode-359000 kubelet[2137]: I0223 22:23:34.919177    2137 topology_manager.go:210] "Topology Admit Handler"
	Feb 23 22:23:34 multinode-359000 kubelet[2137]: E0223 22:23:34.919234    2137 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1cd33c9e-c0c4-48ac-88d1-a643a0eebc54" containerName="coredns"
	Feb 23 22:23:34 multinode-359000 kubelet[2137]: I0223 22:23:34.919258    2137 memory_manager.go:346] "RemoveStaleState removing state" podUID="1cd33c9e-c0c4-48ac-88d1-a643a0eebc54" containerName="coredns"
	Feb 23 22:23:35 multinode-359000 kubelet[2137]: I0223 22:23:35.070806    2137 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8472\" (UniqueName: \"kubernetes.io/projected/e915d92f-ced7-45d8-9cde-6049a324e6f5-kube-api-access-j8472\") pod \"busybox-6b86dd6d48-ghfsb\" (UID: \"e915d92f-ced7-45d8-9cde-6049a324e6f5\") " pod="default/busybox-6b86dd6d48-ghfsb"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-359000 -n multinode-359000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-359000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.51s)

                                                
                                    
TestRunningBinaryUpgrade (61.94s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.511789139.exe start -p running-upgrade-254000 --memory=2200 --vm-driver=docker 
E0223 14:36:40.158581   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.511789139.exe start -p running-upgrade-254000 --memory=2200 --vm-driver=docker : exit status 70 (44.075403767s)

                                                
                                                
-- stdout --
	* [running-upgrade-254000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig156707625
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:36:42.313000821 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-254000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:37:01.749263922 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-254000", then "minikube start -p running-upgrade-254000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 194.82 KiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 2.33 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 15.45 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 29.31 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 43.62 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 54.08 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 68.44 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 83.11 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 97.12 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 110.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 121.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 135.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 149.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 162.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 176.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 190.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 205.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 219.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 234.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 245.92 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 260.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 274.70 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 289.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 303.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 317.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 330.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 343.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 349.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 364.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 377.87 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 392.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 406.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 421.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 435.41 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 449.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 460.11 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 473.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 488.78 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.t
ar.lz4: 500.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 514.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 529.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:37:01.749263922 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
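The docker.service diff captured above appears to be the core of this failure: the legacy minikube v1.9.0 provisioner rewrites the unit so that the inherited ExecStart= line is cleared before its own is set (systemd only accepts multiple ExecStart= values for Type=oneshot services), and the rewritten service then refuses to start inside this container. As a minimal sketch of that override pattern, with an illustrative drop-in path and dockerd flags rather than the exact unit minikube writes:

	# /etc/systemd/system/docker.service.d/override.conf  (hypothetical drop-in path)
	[Service]
	# Clear the ExecStart inherited from the base unit first; otherwise systemd
	# rejects the unit with "Service has more than one ExecStart= setting".
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

	# Apply it the same way the provisioner does:
	#   sudo systemctl daemon-reload && sudo systemctl restart docker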
version_upgrade_test.go:128: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.511789139.exe start -p running-upgrade-254000 --memory=2200 --vm-driver=docker 
E0223 14:37:06.102034   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.511789139.exe start -p running-upgrade-254000 --memory=2200 --vm-driver=docker : exit status 70 (4.36850666s)

                                                
                                                
-- stdout --
	* [running-upgrade-254000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig4069004971
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-254000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.511789139.exe start -p running-upgrade-254000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.511789139.exe start -p running-upgrade-254000 --memory=2200 --vm-driver=docker : exit status 70 (4.361519808s)

                                                
                                                
-- stdout --
	* [running-upgrade-254000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig262829121
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-254000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:134: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-02-23 14:37:15.530462 -0800 PST m=+2323.610280657
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-254000
helpers_test.go:235: (dbg) docker inspect running-upgrade-254000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b9de3ed4abd6a6f7b2d1fdcf55191b22e4e371fe92fa04830e2908e4be89b3b1",
	        "Created": "2023-02-23T22:36:50.502473778Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 162095,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:36:50.719338762Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/b9de3ed4abd6a6f7b2d1fdcf55191b22e4e371fe92fa04830e2908e4be89b3b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9de3ed4abd6a6f7b2d1fdcf55191b22e4e371fe92fa04830e2908e4be89b3b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9de3ed4abd6a6f7b2d1fdcf55191b22e4e371fe92fa04830e2908e4be89b3b1/hosts",
	        "LogPath": "/var/lib/docker/containers/b9de3ed4abd6a6f7b2d1fdcf55191b22e4e371fe92fa04830e2908e4be89b3b1/b9de3ed4abd6a6f7b2d1fdcf55191b22e4e371fe92fa04830e2908e4be89b3b1-json.log",
	        "Name": "/running-upgrade-254000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-254000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7b24531b2ddfeb4a8da06b55a93bcbfa9f90c1dcad20ffa2fbd73812dbf5b019-init/diff:/var/lib/docker/overlay2/fe163f75d15c1600a86cbda939dc828f32e948a7ba80a10851cf76d1ccad977b/diff:/var/lib/docker/overlay2/bc89f9b633d0df79d775606c72e2b551d9362eee940b481aebffb8273e8a489d/diff:/var/lib/docker/overlay2/57e0af661a5f2fa2ae19898701b2cc8814c0dcf8d09b930829d352647ee3e589/diff:/var/lib/docker/overlay2/9f5009bd56682aeeddfbc59aecc89f04f13bc1b37dbf1ca06fc540d5cba93991/diff:/var/lib/docker/overlay2/7dc8d3304a4fe44e8d19be3bdbe4f47caf8385a4d22e2a9bbd2774da894e7bdd/diff:/var/lib/docker/overlay2/029fd9baa1c4cdcd0a43240a6eadb0e7f4d1421b1d2434fdd87df54f675baf11/diff:/var/lib/docker/overlay2/5829b2c789886a5cd39008c129b46e73f7822a1473abea669637b6bd0efe68e3/diff:/var/lib/docker/overlay2/215a98184a6fa615cf1cc848d59bac9d2ac965359281f0a133d575bc7517d495/diff:/var/lib/docker/overlay2/3bde475daae19a9a6f1d3f4c372cd4b0c6d5f52432bcf09ad14ed428b62a6b95/diff:/var/lib/docker/overlay2/34e6b4
6412104b179a789c1ace1c521f89aaeb25c46bcf84e241b0808ddb923a/diff:/var/lib/docker/overlay2/7c536fcf86d065c285b0ec5a1f285af313f5a15ff977306e6e2cbba95fdc64f7/diff:/var/lib/docker/overlay2/a5bc9269cf95ad2bf297949ae6146b5e75680c1c17b5920b9de16fcec458310d/diff:/var/lib/docker/overlay2/4c4cd194559d13662a7e8531a58939ec2f267d8bff017a39654d075f9b2b880b/diff:/var/lib/docker/overlay2/6cdc854178c07262b5702fcbd3831af9eb85d9c03b0fbe1de673fec75d0969f1/diff:/var/lib/docker/overlay2/3c937187f815d9743ba04c27b3add3e4446625932e5f3996a7effea0c83d1587/diff:/var/lib/docker/overlay2/6ac7243a6fc041dd3d75ed91cc6bf9f0ec556757a168d97780aa6a00b7b7f23e/diff:/var/lib/docker/overlay2/e914889bbbfe1609ea740a0363c6e6ac21844aa775b4d8174565db3d75ace01f/diff:/var/lib/docker/overlay2/c4b8bd019ef4127f6d6dfdd2975426195c30e4cd6616ddd351168fcdaf91ed74/diff:/var/lib/docker/overlay2/9701172dcdfa6982ce257b348de5f29a654e8bf321d54fed773c718337e960d4/diff:/var/lib/docker/overlay2/4fe3e1ad7e3cfc88c7e5be7172e081e5bcc0b5cfb616e6d57c4917393e9ab41d/diff:/var/lib/d
ocker/overlay2/f7cadf028e1496dd2b43fa3a6f5f141f5eec9db02540deff017e1be412897e4b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b24531b2ddfeb4a8da06b55a93bcbfa9f90c1dcad20ffa2fbd73812dbf5b019/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b24531b2ddfeb4a8da06b55a93bcbfa9f90c1dcad20ffa2fbd73812dbf5b019/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b24531b2ddfeb4a8da06b55a93bcbfa9f90c1dcad20ffa2fbd73812dbf5b019/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-254000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-254000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-254000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-254000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-254000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "285835ebc36e3341f97772060b92e6484c9f4ea9189332fd0fe62deeb0fead49",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59831"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59832"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59833"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/285835ebc36e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "d0910d51a9ecdc53d7a40b6972bfae8079f0a9ae297105e4784d777182a5536b",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "722434272e12cbeb564120b7c0da7377c2ccab5f40fcf563a974355b96a35fdf",
	                    "EndpointID": "d0910d51a9ecdc53d7a40b6972bfae8079f0a9ae297105e4784d777182a5536b",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
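The inspect dump above includes the host port mappings for the container (SSH on 22/tcp, the Docker API on 2376/tcp, the apiserver on 8443/tcp). When only one mapped port is needed rather than the full JSON, a Go-template query over the same fields is enough; the template path below is standard docker inspect syntax and the container name is the one from this run:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' running-upgrade-254000
	# prints 59831 for the container captured above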
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-254000 -n running-upgrade-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-254000 -n running-upgrade-254000: exit status 6 (376.970834ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 14:37:15.955020   25507 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-254000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-254000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-254000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-254000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-254000: (2.316593213s)
--- FAIL: TestRunningBinaryUpgrade (61.94s)
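For a manual retry outside the test harness, the failing start's own hint is the usual recovery path: delete the stale profile, then start it again with verbose logging. A sketch of that sequence, using the same flags the log suggests:

	minikube delete -p running-upgrade-254000
	minikube start -p running-upgrade-254000 --alsologtostderr -v=1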

                                                
                                    
x
+
TestKubernetesUpgrade (562.34s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-880000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-880000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m11.60916622s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-880000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-880000 in cluster kubernetes-upgrade-880000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 14:34:39.422437   24476 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:34:39.423321   24476 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:34:39.423330   24476 out.go:309] Setting ErrFile to fd 2...
	I0223 14:34:39.423338   24476 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:34:39.423596   24476 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:34:39.426330   24476 out.go:303] Setting JSON to false
	I0223 14:34:39.453847   24476 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7454,"bootTime":1677184225,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 14:34:39.454001   24476 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 14:34:39.475651   24476 out.go:177] * [kubernetes-upgrade-880000] minikube v1.29.0 on Darwin 13.2
	I0223 14:34:39.517307   24476 notify.go:220] Checking for updates...
	I0223 14:34:39.538139   24476 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 14:34:39.559120   24476 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:34:39.580206   24476 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 14:34:39.601240   24476 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 14:34:39.622167   24476 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	I0223 14:34:39.643160   24476 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 14:34:39.664686   24476 config.go:182] Loaded profile config "missing-upgrade-960000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 14:34:39.664736   24476 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 14:34:39.747225   24476 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 14:34:39.747441   24476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:34:39.953301   24476 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:false NGoroutines:69 SystemTime:2023-02-23 22:34:39.825884024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:34:39.974954   24476 out.go:177] * Using the docker driver based on user configuration
	I0223 14:34:39.995698   24476 start.go:296] selected driver: docker
	I0223 14:34:39.995714   24476 start.go:857] validating driver "docker" against <nil>
	I0223 14:34:39.995729   24476 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 14:34:39.999209   24476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:34:40.188118   24476 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:false NGoroutines:69 SystemTime:2023-02-23 22:34:40.065039267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:34:40.188268   24476 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 14:34:40.188469   24476 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 14:34:40.210043   24476 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 14:34:40.230816   24476 cni.go:84] Creating CNI manager for ""
	I0223 14:34:40.230838   24476 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 14:34:40.230850   24476 start_flags.go:319] config:
	{Name:kubernetes-upgrade-880000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-880000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:34:40.272966   24476 out.go:177] * Starting control plane node kubernetes-upgrade-880000 in cluster kubernetes-upgrade-880000
	I0223 14:34:40.293802   24476 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 14:34:40.314845   24476 out.go:177] * Pulling base image ...
	I0223 14:34:40.356841   24476 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 14:34:40.356890   24476 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 14:34:40.356909   24476 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 14:34:40.356922   24476 cache.go:57] Caching tarball of preloaded images
	I0223 14:34:40.357056   24476 preload.go:174] Found /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 14:34:40.357067   24476 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 14:34:40.357398   24476 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/config.json ...
	I0223 14:34:40.357582   24476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/config.json: {Name:mk54e2ead038ea33f3a3212ddee4afdf6fd528cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:34:40.432831   24476 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 14:34:40.432873   24476 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 14:34:40.432904   24476 cache.go:193] Successfully downloaded all kic artifacts
	I0223 14:34:40.432988   24476 start.go:364] acquiring machines lock for kubernetes-upgrade-880000: {Name:mkd143b35f6b9196e66b282d1f19ff1f3380a692 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 14:34:40.433190   24476 start.go:368] acquired machines lock for "kubernetes-upgrade-880000" in 188.582µs
	I0223 14:34:40.433229   24476 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-880000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-880000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 14:34:40.433310   24476 start.go:125] createHost starting for "" (driver="docker")
	I0223 14:34:40.455123   24476 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 14:34:40.455487   24476 start.go:159] libmachine.API.Create for "kubernetes-upgrade-880000" (driver="docker")
	I0223 14:34:40.455529   24476 client.go:168] LocalClient.Create starting
	I0223 14:34:40.455662   24476 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem
	I0223 14:34:40.455712   24476 main.go:141] libmachine: Decoding PEM data...
	I0223 14:34:40.455732   24476 main.go:141] libmachine: Parsing certificate...
	I0223 14:34:40.455804   24476 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem
	I0223 14:34:40.455839   24476 main.go:141] libmachine: Decoding PEM data...
	I0223 14:34:40.455848   24476 main.go:141] libmachine: Parsing certificate...
	I0223 14:34:40.456309   24476 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-880000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 14:34:40.531428   24476 cli_runner.go:211] docker network inspect kubernetes-upgrade-880000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 14:34:40.531579   24476 network_create.go:281] running [docker network inspect kubernetes-upgrade-880000] to gather additional debugging logs...
	I0223 14:34:40.531600   24476 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-880000
	W0223 14:34:40.619703   24476 cli_runner.go:211] docker network inspect kubernetes-upgrade-880000 returned with exit code 1
	I0223 14:34:40.619775   24476 network_create.go:284] error running [docker network inspect kubernetes-upgrade-880000]: docker network inspect kubernetes-upgrade-880000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-880000
	I0223 14:34:40.619804   24476 network_create.go:286] output of [docker network inspect kubernetes-upgrade-880000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-880000
	
	** /stderr **
	I0223 14:34:40.620002   24476 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 14:34:40.703783   24476 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 14:34:40.704236   24476 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e64a90}
	I0223 14:34:40.704254   24476 network_create.go:123] attempt to create docker network kubernetes-upgrade-880000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 14:34:40.704344   24476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-880000 kubernetes-upgrade-880000
	W0223 14:34:40.782542   24476 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-880000 kubernetes-upgrade-880000 returned with exit code 1
	W0223 14:34:40.782588   24476 network_create.go:148] failed to create docker network kubernetes-upgrade-880000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-880000 kubernetes-upgrade-880000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 14:34:40.782613   24476 network_create.go:115] failed to create docker network kubernetes-upgrade-880000 192.168.58.0/24, will retry: subnet is taken
	I0223 14:34:40.784044   24476 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 14:34:40.784596   24476 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e658e0}
	I0223 14:34:40.784612   24476 network_create.go:123] attempt to create docker network kubernetes-upgrade-880000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 14:34:40.784696   24476 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-880000 kubernetes-upgrade-880000
	I0223 14:34:41.321208   24476 network_create.go:107] docker network kubernetes-upgrade-880000 192.168.67.0/24 created
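	The two attempts above show minikube's subnet retry: 192.168.58.0/24 overlapped with an existing Docker network pool ("Pool overlaps with other one on this address space"), so it fell through to 192.168.67.0/24. A rough manual check of the same condition is sketched below; the network name example-net is hypothetical and not part of this run.
	# list the subnets Docker has already handed out
	docker network inspect $(docker network ls -q) --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# creating a network on an occupied subnet fails with the same daemon error seen above
	docker network create --driver=bridge --subnet=192.168.58.0/24 example-net \
	  || echo 'subnet taken, try the next private /24 (e.g. 192.168.67.0/24)'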
	I0223 14:34:41.321269   24476 kic.go:117] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-880000" container
	I0223 14:34:41.321415   24476 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 14:34:41.393947   24476 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-880000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-880000 --label created_by.minikube.sigs.k8s.io=true
	I0223 14:34:41.467997   24476 oci.go:103] Successfully created a docker volume kubernetes-upgrade-880000
	I0223 14:34:41.468140   24476 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-880000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-880000 --entrypoint /usr/bin/test -v kubernetes-upgrade-880000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 14:34:42.624558   24476 cli_runner.go:217] Completed: docker run --rm --name kubernetes-upgrade-880000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-880000 --entrypoint /usr/bin/test -v kubernetes-upgrade-880000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: (1.156353995s)
	I0223 14:34:42.624582   24476 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-880000
	I0223 14:34:42.624610   24476 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 14:34:42.624629   24476 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 14:34:42.624800   24476 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-880000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 14:34:48.574672   24476 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-880000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (5.949768563s)
	I0223 14:34:48.574693   24476 kic.go:199] duration metric: took 5.950032 seconds to extract preloaded images to volume
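	The preload tarball for v1.16.0 was unpacked straight into the kubernetes-upgrade-880000 Docker volume through a throwaway tar container. If needed, the extracted layout can be inspected by hand with a similar read-only container; this is only a sketch, not a step the test runs.
	# peek inside the preload volume using the same kicbase image referenced in the log
	docker run --rm --entrypoint /bin/ls \
	  -v kubernetes-upgrade-880000:/var:ro \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc \
	  /var/lib/docker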
	I0223 14:34:48.574801   24476 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 14:34:48.716079   24476 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-880000 --name kubernetes-upgrade-880000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-880000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-880000 --network kubernetes-upgrade-880000 --ip 192.168.67.2 --volume kubernetes-upgrade-880000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 14:34:49.181661   24476 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-880000 --format={{.State.Running}}
	I0223 14:34:49.240468   24476 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-880000 --format={{.State.Status}}
	I0223 14:34:49.299419   24476 cli_runner.go:164] Run: docker exec kubernetes-upgrade-880000 stat /var/lib/dpkg/alternatives/iptables
	I0223 14:34:49.403077   24476 oci.go:144] the created container "kubernetes-upgrade-880000" has a running status.
	I0223 14:34:49.403124   24476 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa...
	I0223 14:34:49.448048   24476 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 14:34:49.549915   24476 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-880000 --format={{.State.Status}}
	I0223 14:34:49.610004   24476 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 14:34:49.610025   24476 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-880000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 14:34:49.711584   24476 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-880000 --format={{.State.Status}}
	I0223 14:34:49.768623   24476 machine.go:88] provisioning docker machine ...
	I0223 14:34:49.768665   24476 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-880000"
	I0223 14:34:49.768767   24476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:34:49.825664   24476 main.go:141] libmachine: Using SSH client type: native
	I0223 14:34:49.826071   24476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59681 <nil> <nil>}
	I0223 14:34:49.826085   24476 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-880000 && echo "kubernetes-upgrade-880000" | sudo tee /etc/hostname
	I0223 14:34:49.969329   24476 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-880000
	
	I0223 14:34:49.969425   24476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:34:50.026522   24476 main.go:141] libmachine: Using SSH client type: native
	I0223 14:34:50.026880   24476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59681 <nil> <nil>}
	I0223 14:34:50.026894   24476 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-880000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-880000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-880000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 14:34:50.163195   24476 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 14:34:50.163217   24476 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
	I0223 14:34:50.163236   24476 ubuntu.go:177] setting up certificates
	I0223 14:34:50.163243   24476 provision.go:83] configureAuth start
	I0223 14:34:50.163318   24476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-880000
	I0223 14:34:50.220833   24476 provision.go:138] copyHostCerts
	I0223 14:34:50.220959   24476 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
	I0223 14:34:50.220968   24476 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:34:50.221088   24476 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
	I0223 14:34:50.221283   24476 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
	I0223 14:34:50.221289   24476 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:34:50.221352   24476 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
	I0223 14:34:50.221495   24476 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
	I0223 14:34:50.221500   24476 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:34:50.221566   24476 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
	I0223 14:34:50.221684   24476 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-880000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-880000]
	I0223 14:34:50.451437   24476 provision.go:172] copyRemoteCerts
	I0223 14:34:50.451496   24476 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 14:34:50.451546   24476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:34:50.509401   24476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59681 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa Username:docker}
	I0223 14:34:50.605481   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 14:34:50.622852   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0223 14:34:50.639806   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 14:34:50.656756   24476 provision.go:86] duration metric: configureAuth took 493.49765ms
	I0223 14:34:50.656769   24476 ubuntu.go:193] setting minikube options for container-runtime
	I0223 14:34:50.656928   24476 config.go:182] Loaded profile config "kubernetes-upgrade-880000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 14:34:50.657002   24476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:34:50.713745   24476 main.go:141] libmachine: Using SSH client type: native
	I0223 14:34:50.714114   24476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59681 <nil> <nil>}
	I0223 14:34:50.714132   24476 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 14:34:50.848938   24476 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 14:34:50.848962   24476 ubuntu.go:71] root file system type: overlay
	I0223 14:34:50.849076   24476 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 14:34:50.849176   24476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:34:50.908026   24476 main.go:141] libmachine: Using SSH client type: native
	I0223 14:34:50.908384   24476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59681 <nil> <nil>}
	I0223 14:34:50.908438   24476 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 14:34:51.048790   24476 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 14:34:51.048887   24476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:34:51.105159   24476 main.go:141] libmachine: Using SSH client type: native
	I0223 14:34:51.105526   24476 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59681 <nil> <nil>}
	I0223 14:34:51.105540   24476 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 14:34:51.731197   24476 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:34:51.046901737 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
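	The "diff || mv && restart" command above only swaps in the new docker.service and restarts the daemon when the rendered unit differs from what is already installed; the unified diff in the output shows exactly what changed. A quick hedged way to confirm the override took effect on the node (not something the test itself does):
	# show the ExecStart lines systemd actually loaded, then the running daemon version
	sudo systemctl cat docker.service | grep '^ExecStart='
	docker version --format '{{.Server.Version}}'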
	
	I0223 14:34:51.731225   24476 machine.go:91] provisioned docker machine in 1.962572979s
	I0223 14:34:51.731231   24476 client.go:171] LocalClient.Create took 11.275631681s
	I0223 14:34:51.731249   24476 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-880000" took 11.275699661s
	I0223 14:34:51.731257   24476 start.go:300] post-start starting for "kubernetes-upgrade-880000" (driver="docker")
	I0223 14:34:51.731262   24476 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 14:34:51.731341   24476 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 14:34:51.731428   24476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:34:51.790663   24476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59681 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa Username:docker}
	I0223 14:34:51.885434   24476 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 14:34:51.888874   24476 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 14:34:51.888891   24476 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 14:34:51.888898   24476 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 14:34:51.888904   24476 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 14:34:51.888914   24476 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
	I0223 14:34:51.889027   24476 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
	I0223 14:34:51.889211   24476 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
	I0223 14:34:51.889405   24476 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 14:34:51.896575   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:34:51.913412   24476 start.go:303] post-start completed in 182.144442ms
	I0223 14:34:51.913931   24476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-880000
	I0223 14:34:51.970705   24476 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/config.json ...
	I0223 14:34:51.971118   24476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 14:34:51.971174   24476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:34:52.027401   24476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59681 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa Username:docker}
	I0223 14:34:52.120551   24476 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 14:34:52.125249   24476 start.go:128] duration metric: createHost completed in 11.691865635s
	I0223 14:34:52.125264   24476 start.go:83] releasing machines lock for "kubernetes-upgrade-880000", held for 11.69199993s
	I0223 14:34:52.125337   24476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-880000
	I0223 14:34:52.182204   24476 ssh_runner.go:195] Run: cat /version.json
	I0223 14:34:52.182223   24476 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0223 14:34:52.182279   24476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:34:52.182299   24476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:34:52.241285   24476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59681 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa Username:docker}
	I0223 14:34:52.241642   24476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59681 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa Username:docker}
	I0223 14:34:52.332603   24476 ssh_runner.go:195] Run: systemctl --version
	I0223 14:34:52.533981   24476 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 14:34:52.539925   24476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 14:34:52.560699   24476 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 14:34:52.560788   24476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 14:34:52.575072   24476 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 14:34:52.583283   24476 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 14:34:52.583300   24476 start.go:485] detecting cgroup driver to use...
	I0223 14:34:52.583312   24476 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:34:52.583401   24476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:34:52.597731   24476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0223 14:34:52.606715   24476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 14:34:52.615938   24476 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 14:34:52.615998   24476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 14:34:52.624944   24476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:34:52.634200   24476 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 14:34:52.643487   24476 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:34:52.652651   24476 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 14:34:52.661502   24476 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 14:34:52.670873   24476 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 14:34:52.678704   24476 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 14:34:52.686761   24476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:34:52.756657   24476 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 14:34:52.835041   24476 start.go:485] detecting cgroup driver to use...
	I0223 14:34:52.835063   24476 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:34:52.835137   24476 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 14:34:52.845968   24476 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 14:34:52.846049   24476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 14:34:52.856281   24476 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:34:52.871015   24476 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 14:34:52.939125   24476 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 14:34:53.022713   24476 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 14:34:53.022736   24476 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 14:34:53.046237   24476 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:34:53.128101   24476 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 14:34:53.417293   24476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:34:53.443853   24476 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:34:53.513248   24476 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0223 14:34:53.513380   24476 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-880000 dig +short host.docker.internal
	I0223 14:34:53.626935   24476 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 14:34:53.627026   24476 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 14:34:53.631788   24476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:34:53.642423   24476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:34:53.701073   24476 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 14:34:53.701158   24476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:34:53.722330   24476 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 14:34:53.722346   24476 docker.go:560] Images already preloaded, skipping extraction
	I0223 14:34:53.722439   24476 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:34:53.745272   24476 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 14:34:53.745288   24476 cache_images.go:84] Images are preloaded, skipping loading
	I0223 14:34:53.745393   24476 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 14:34:53.773364   24476 cni.go:84] Creating CNI manager for ""
	I0223 14:34:53.773391   24476 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 14:34:53.773413   24476 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 14:34:53.773436   24476 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-880000 NodeName:kubernetes-upgrade-880000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 14:34:53.773556   24476 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-880000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-880000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 14:34:53.773628   24476 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-880000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-880000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 14:34:53.773697   24476 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0223 14:34:53.782317   24476 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 14:34:53.782390   24476 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 14:34:53.790815   24476 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0223 14:34:53.818322   24476 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 14:34:53.831182   24476 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
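	At this point the rendered kubeadm config shown earlier has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node (it is promoted to /var/tmp/minikube/kubeadm.yaml just before init runs below). As a hedged spot-check, the same file can be validated against the bundled kubeadm binary without touching the cluster; the --dry-run pass here is an assumption for illustration, not a step the test performs.
	# dry-run the generated config with the matching kubeadm to surface schema errors early
	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run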
	I0223 14:34:53.844293   24476 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0223 14:34:53.848177   24476 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:34:53.857717   24476 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000 for IP: 192.168.67.2
	I0223 14:34:53.857743   24476 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:34:53.857927   24476 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
	I0223 14:34:53.857993   24476 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
	I0223 14:34:53.858044   24476 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.key
	I0223 14:34:53.858058   24476 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.crt with IP's: []
	I0223 14:34:53.936679   24476 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.crt ...
	I0223 14:34:53.936694   24476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.crt: {Name:mka43ae56385ee0487dbcb60928bc68c09bd11da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:34:53.936987   24476 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.key ...
	I0223 14:34:53.936995   24476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.key: {Name:mka672b98860c6b3d81a3577ac850ae589416287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:34:53.937176   24476 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.key.c7fa3a9e
	I0223 14:34:53.937190   24476 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 14:34:54.039418   24476 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.crt.c7fa3a9e ...
	I0223 14:34:54.039434   24476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.crt.c7fa3a9e: {Name:mk205b94c51f530531a35ab8f4094bb6d9f2f016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:34:54.039729   24476 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.key.c7fa3a9e ...
	I0223 14:34:54.039738   24476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.key.c7fa3a9e: {Name:mkc4417381903d1a9f1a690d4333565037c78155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:34:54.039942   24476 certs.go:333] copying /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.crt
	I0223 14:34:54.040140   24476 certs.go:337] copying /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.key
	I0223 14:34:54.040292   24476 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/proxy-client.key
	I0223 14:34:54.040307   24476 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/proxy-client.crt with IP's: []
	I0223 14:34:54.240961   24476 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/proxy-client.crt ...
	I0223 14:34:54.240984   24476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/proxy-client.crt: {Name:mkdb197fc7d89aee06b9a436e97f71e15875f0fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:34:54.241280   24476 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/proxy-client.key ...
	I0223 14:34:54.241288   24476 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/proxy-client.key: {Name:mk7ff0d09ab7798abafdeb65559b82cae29ed0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:34:54.241710   24476 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
	W0223 14:34:54.241765   24476 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
	I0223 14:34:54.241778   24476 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 14:34:54.241816   24476 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
	I0223 14:34:54.241849   24476 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
	I0223 14:34:54.241880   24476 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
	I0223 14:34:54.241951   24476 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:34:54.242541   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 14:34:54.260318   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 14:34:54.277729   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 14:34:54.294653   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0223 14:34:54.311537   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 14:34:54.328552   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 14:34:54.347108   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 14:34:54.365303   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 14:34:54.383921   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
	I0223 14:34:54.402733   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
	I0223 14:34:54.420714   24476 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 14:34:54.439327   24476 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 14:34:54.459708   24476 ssh_runner.go:195] Run: openssl version
	I0223 14:34:54.465892   24476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
	I0223 14:34:54.474541   24476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
	I0223 14:34:54.478500   24476 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:34:54.478550   24476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
	I0223 14:34:54.483967   24476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 14:34:54.491873   24476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 14:34:54.499965   24476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:34:54.503991   24476 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:34:54.504033   24476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:34:54.509897   24476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 14:34:54.518379   24476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
	I0223 14:34:54.526927   24476 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
	I0223 14:34:54.531052   24476 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:34:54.531107   24476 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
	I0223 14:34:54.536682   24476 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
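	The ls/openssl/ln sequence above installs each extra CA into the standard OpenSSL certificate directory layout, where certificates are located through symlinks named after their subject hash (b5213941.0, 3ec20f2e.0 and 51391683.0 here). A minimal sketch of that step for a single file, reusing names from the log:
	# compute the subject hash and create the <hash>.0 symlink OpenSSL looks up
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"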
	I0223 14:34:54.545524   24476 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-880000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-880000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:34:54.545646   24476 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 14:34:54.565592   24476 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 14:34:54.573843   24476 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 14:34:54.582025   24476 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 14:34:54.582086   24476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 14:34:54.590508   24476 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 14:34:54.590537   24476 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 14:34:54.640743   24476 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 14:34:54.640792   24476 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 14:34:54.814068   24476 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 14:34:54.814172   24476 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 14:34:54.814255   24476 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 14:34:54.972767   24476 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 14:34:54.973742   24476 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 14:34:54.980107   24476 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 14:34:55.043400   24476 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 14:34:55.085865   24476 out.go:204]   - Generating certificates and keys ...
	I0223 14:34:55.085967   24476 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 14:34:55.086053   24476 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 14:34:55.089525   24476 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 14:34:55.245500   24476 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 14:34:55.439862   24476 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 14:34:55.559820   24476 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 14:34:55.763056   24476 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 14:34:55.763159   24476 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-880000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0223 14:34:56.024234   24476 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 14:34:56.024363   24476 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-880000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0223 14:34:56.101319   24476 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 14:34:56.464676   24476 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 14:34:56.538967   24476 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 14:34:56.539013   24476 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 14:34:56.685561   24476 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 14:34:56.867770   24476 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 14:34:56.966325   24476 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 14:34:57.018931   24476 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 14:34:57.019611   24476 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 14:34:57.042109   24476 out.go:204]   - Booting up control plane ...
	I0223 14:34:57.042292   24476 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 14:34:57.042472   24476 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 14:34:57.042748   24476 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 14:34:57.042912   24476 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 14:34:57.043242   24476 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 14:35:37.029098   24476 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 14:35:37.030065   24476 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:35:37.030310   24476 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:35:42.031561   24476 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:35:42.031783   24476 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:35:52.033536   24476 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:35:52.033801   24476 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:36:12.035330   24476 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:36:12.035596   24476 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:36:52.035836   24476 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:36:52.036086   24476 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:36:52.036102   24476 kubeadm.go:322] 
	I0223 14:36:52.036187   24476 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 14:36:52.036249   24476 kubeadm.go:322] 	timed out waiting for the condition
	I0223 14:36:52.036263   24476 kubeadm.go:322] 
	I0223 14:36:52.036300   24476 kubeadm.go:322] This error is likely caused by:
	I0223 14:36:52.036350   24476 kubeadm.go:322] 	- The kubelet is not running
	I0223 14:36:52.036482   24476 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 14:36:52.036496   24476 kubeadm.go:322] 
	I0223 14:36:52.036613   24476 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 14:36:52.036658   24476 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 14:36:52.036697   24476 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 14:36:52.036708   24476 kubeadm.go:322] 
	I0223 14:36:52.036839   24476 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 14:36:52.036981   24476 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 14:36:52.037066   24476 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 14:36:52.037140   24476 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 14:36:52.037225   24476 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 14:36:52.037263   24476 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 14:36:52.040342   24476 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 14:36:52.040445   24476 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 14:36:52.040550   24476 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 14:36:52.040630   24476 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 14:36:52.040714   24476 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 14:36:52.040776   24476 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0223 14:36:52.040979   24476 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-880000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-880000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-880000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-880000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0223 14:36:52.041013   24476 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 14:36:52.458648   24476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:36:52.469737   24476 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 14:36:52.469815   24476 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 14:36:52.482733   24476 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 14:36:52.482760   24476 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 14:36:52.540930   24476 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 14:36:52.541015   24476 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 14:36:52.732733   24476 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 14:36:52.732891   24476 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 14:36:52.732987   24476 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 14:36:52.920928   24476 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 14:36:52.922245   24476 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 14:36:52.930983   24476 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 14:36:53.005024   24476 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 14:36:53.025385   24476 out.go:204]   - Generating certificates and keys ...
	I0223 14:36:53.025461   24476 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 14:36:53.025577   24476 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 14:36:53.025695   24476 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 14:36:53.025823   24476 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 14:36:53.025957   24476 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 14:36:53.026073   24476 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 14:36:53.026277   24476 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 14:36:53.026357   24476 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 14:36:53.026461   24476 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 14:36:53.026553   24476 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 14:36:53.026640   24476 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 14:36:53.026734   24476 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 14:36:53.117473   24476 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 14:36:53.195810   24476 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 14:36:53.291726   24476 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 14:36:53.421749   24476 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 14:36:53.421964   24476 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 14:36:53.443461   24476 out.go:204]   - Booting up control plane ...
	I0223 14:36:53.443657   24476 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 14:36:53.443802   24476 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 14:36:53.443948   24476 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 14:36:53.444068   24476 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 14:36:53.444286   24476 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 14:37:33.432572   24476 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 14:37:33.433097   24476 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:37:33.433268   24476 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:37:38.435072   24476 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:37:38.435281   24476 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:37:48.436713   24476 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:37:48.436965   24476 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:38:08.437342   24476 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:38:08.437501   24476 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:38:48.439880   24476 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:38:48.440093   24476 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:38:48.440110   24476 kubeadm.go:322] 
	I0223 14:38:48.440183   24476 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 14:38:48.440224   24476 kubeadm.go:322] 	timed out waiting for the condition
	I0223 14:38:48.440240   24476 kubeadm.go:322] 
	I0223 14:38:48.440316   24476 kubeadm.go:322] This error is likely caused by:
	I0223 14:38:48.440376   24476 kubeadm.go:322] 	- The kubelet is not running
	I0223 14:38:48.440480   24476 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 14:38:48.440490   24476 kubeadm.go:322] 
	I0223 14:38:48.440617   24476 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 14:38:48.440653   24476 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 14:38:48.440691   24476 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 14:38:48.440709   24476 kubeadm.go:322] 
	I0223 14:38:48.440840   24476 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 14:38:48.440946   24476 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 14:38:48.441045   24476 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 14:38:48.441098   24476 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 14:38:48.441187   24476 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 14:38:48.441227   24476 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 14:38:48.443583   24476 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 14:38:48.443640   24476 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 14:38:48.443742   24476 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 14:38:48.443841   24476 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 14:38:48.443910   24476 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 14:38:48.443963   24476 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 14:38:48.443991   24476 kubeadm.go:403] StartCluster complete in 3m53.897159824s
	I0223 14:38:48.444090   24476 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 14:38:48.462960   24476 logs.go:277] 0 containers: []
	W0223 14:38:48.462972   24476 logs.go:279] No container was found matching "kube-apiserver"
	I0223 14:38:48.463045   24476 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 14:38:48.481612   24476 logs.go:277] 0 containers: []
	W0223 14:38:48.481626   24476 logs.go:279] No container was found matching "etcd"
	I0223 14:38:48.481693   24476 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 14:38:48.500925   24476 logs.go:277] 0 containers: []
	W0223 14:38:48.500938   24476 logs.go:279] No container was found matching "coredns"
	I0223 14:38:48.501005   24476 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 14:38:48.520277   24476 logs.go:277] 0 containers: []
	W0223 14:38:48.520293   24476 logs.go:279] No container was found matching "kube-scheduler"
	I0223 14:38:48.520364   24476 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 14:38:48.539412   24476 logs.go:277] 0 containers: []
	W0223 14:38:48.539425   24476 logs.go:279] No container was found matching "kube-proxy"
	I0223 14:38:48.539496   24476 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 14:38:48.558694   24476 logs.go:277] 0 containers: []
	W0223 14:38:48.558706   24476 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 14:38:48.558775   24476 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 14:38:48.578709   24476 logs.go:277] 0 containers: []
	W0223 14:38:48.578724   24476 logs.go:279] No container was found matching "kindnet"
	I0223 14:38:48.578732   24476 logs.go:123] Gathering logs for Docker ...
	I0223 14:38:48.578741   24476 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 14:38:48.603570   24476 logs.go:123] Gathering logs for container status ...
	I0223 14:38:48.603592   24476 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 14:38:50.647518   24476 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043901533s)
	I0223 14:38:50.647625   24476 logs.go:123] Gathering logs for kubelet ...
	I0223 14:38:50.647633   24476 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 14:38:50.687367   24476 logs.go:123] Gathering logs for dmesg ...
	I0223 14:38:50.687389   24476 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 14:38:50.703521   24476 logs.go:123] Gathering logs for describe nodes ...
	I0223 14:38:50.703538   24476 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 14:38:50.758723   24476 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0223 14:38:50.758751   24476 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 14:38:50.758771   24476 out.go:239] * 
	* 
	W0223 14:38:50.758931   24476 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 14:38:50.758955   24476 out.go:239] * 
	* 
	W0223 14:38:50.759585   24476 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 14:38:50.843044   24476 out.go:177] 
	W0223 14:38:50.885181   24476 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 14:38:50.885267   24476 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 14:38:50.885328   24476 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 14:38:50.906160   24476 out.go:177] 

                                                
                                                
** /stderr **
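The kubeadm failure above carries two actionable hints in its own output: the preflight warning that Docker is using the "cgroupfs" cgroup driver while "systemd" is recommended, and minikube's closing suggestion to pass --extra-config=kubelet.cgroup-driver=systemd. A minimal retry sketch built only from those suggestions (profile name and flags are taken from this log; it is not verified here that this resolves the v1.16.0 bring-up on Docker 23.0.1):

    # retry the same profile with the suggested kubelet cgroup-driver override
    out/minikube-darwin-amd64 start -p kubernetes-upgrade-880000 --memory=2200 \
      --kubernetes-version=v1.16.0 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd
    # if the kubelet still refuses to start, read its journal from inside the node
    out/minikube-darwin-amd64 -p kubernetes-upgrade-880000 ssh -- sudo journalctl -xeu kubelet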
version_upgrade_test.go:232: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-880000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-880000
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-880000: (1.589843374s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-880000 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-880000 status --format={{.Host}}: exit status 7 (105.105595ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
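The --format={{.Host}} argument is a Go template over minikube's status output, and exit status 7 together with "Stopped" is consistent with a profile that was just stopped, which is why the test treats it as acceptable. A small sketch of querying more than one field at once (the .Kubelet and .APIServer field names are assumed from minikube's default status layout, not taken from this log):

    # host plus component states for the stopped profile (extra field names assumed)
    out/minikube-darwin-amd64 -p kubernetes-upgrade-880000 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'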
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-880000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-880000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (4m42.406558622s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-880000 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-880000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-880000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (461.69352ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-880000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-880000
	    minikube start -p kubernetes-upgrade-880000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8800002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-880000 --kubernetes-version=v1.26.1
	    

                                                
                                                
** /stderr **
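Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) is the expected outcome of this step: minikube refuses to downgrade the existing v1.26.1 profile in place, and the test only checks for that refusal. If a real downgrade were wanted, the supported path is the delete-and-recreate sequence the tool itself prints, roughly:

    # recreate the profile at the older Kubernetes version (restates suggestion 1 above)
    out/minikube-darwin-amd64 delete -p kubernetes-upgrade-880000
    out/minikube-darwin-amd64 start -p kubernetes-upgrade-880000 --kubernetes-version=v1.16.0 --driver=docker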
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-880000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 
E0223 14:43:50.078498   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-880000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (19.496358483s)
version_upgrade_test.go:287: *** TestKubernetesUpgrade FAILED at 2023-02-23 14:43:55.22921 -0800 PST m=+2723.199804806
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-880000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-880000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "87d69cae2d6eba4bc0f74856d96d7561717240bf0c5c3bea4078dd85902392a3",
	        "Created": "2023-02-23T22:34:48.769919541Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 173880,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:38:54.267639757Z",
	            "FinishedAt": "2023-02-23T22:38:51.452990344Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/87d69cae2d6eba4bc0f74856d96d7561717240bf0c5c3bea4078dd85902392a3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/87d69cae2d6eba4bc0f74856d96d7561717240bf0c5c3bea4078dd85902392a3/hostname",
	        "HostsPath": "/var/lib/docker/containers/87d69cae2d6eba4bc0f74856d96d7561717240bf0c5c3bea4078dd85902392a3/hosts",
	        "LogPath": "/var/lib/docker/containers/87d69cae2d6eba4bc0f74856d96d7561717240bf0c5c3bea4078dd85902392a3/87d69cae2d6eba4bc0f74856d96d7561717240bf0c5c3bea4078dd85902392a3-json.log",
	        "Name": "/kubernetes-upgrade-880000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-880000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-880000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b5179e2425874816d336b88b8c8a6242d2a512e940909614eaad30e071ed346c-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b5179e2425874816d336b88b8c8a6242d2a512e940909614eaad30e071ed346c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b5179e2425874816d336b88b8c8a6242d2a512e940909614eaad30e071ed346c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b5179e2425874816d336b88b8c8a6242d2a512e940909614eaad30e071ed346c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-880000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-880000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-880000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-880000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-880000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a602d11182bd50780ec0868f33431aabee2508570d5d0a9c1f8be3fe07c5bee",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59968"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59964"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59965"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59966"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59967"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9a602d11182b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-880000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "87d69cae2d6e",
	                        "kubernetes-upgrade-880000"
	                    ],
	                    "NetworkID": "066886c8adbfe62ecf67031f1bb725aa8a544307faf06d87e7bf3b55aa2efb43",
	                    "EndpointID": "af2a20c403f1ca06f4f1066bb3c510f6ad5d66cc24ae20eb03cd2cfd7ef0f71b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
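The full docker inspect dump is kept for the post-mortem, but the fields that matter most for this failure (container state, restart count, published ports) can be pulled directly with a Go-template filter; a short sketch against the same container:

    # container state and restart count
    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' kubernetes-upgrade-880000
    # host port published for 8443/tcp (same template pattern minikube uses above for 22/tcp)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-880000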
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-880000 -n kubernetes-upgrade-880000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-880000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-880000 logs -n 25: (2.754634141s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-880000   | kubernetes-upgrade-880000 | jenkins | v1.29.0 | 23 Feb 23 14:38 PST | 23 Feb 23 14:38 PST |
	| unpause | -p pause-731000                | pause-731000              | jenkins | v1.29.0 | 23 Feb 23 14:38 PST | 23 Feb 23 14:38 PST |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| pause   | -p pause-731000                | pause-731000              | jenkins | v1.29.0 | 23 Feb 23 14:38 PST | 23 Feb 23 14:38 PST |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-880000   | kubernetes-upgrade-880000 | jenkins | v1.29.0 | 23 Feb 23 14:38 PST | 23 Feb 23 14:43 PST |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| delete  | -p pause-731000                | pause-731000              | jenkins | v1.29.0 | 23 Feb 23 14:38 PST | 23 Feb 23 14:38 PST |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| delete  | -p pause-731000                | pause-731000              | jenkins | v1.29.0 | 23 Feb 23 14:38 PST | 23 Feb 23 14:38 PST |
	| start   | -p NoKubernetes-972000         | NoKubernetes-972000       | jenkins | v1.29.0 | 23 Feb 23 14:38 PST |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-972000         | NoKubernetes-972000       | jenkins | v1.29.0 | 23 Feb 23 14:38 PST | 23 Feb 23 14:39 PST |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-972000         | NoKubernetes-972000       | jenkins | v1.29.0 | 23 Feb 23 14:39 PST | 23 Feb 23 14:39 PST |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-972000         | NoKubernetes-972000       | jenkins | v1.29.0 | 23 Feb 23 14:39 PST | 23 Feb 23 14:39 PST |
	| start   | -p NoKubernetes-972000         | NoKubernetes-972000       | jenkins | v1.29.0 | 23 Feb 23 14:39 PST | 23 Feb 23 14:39 PST |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-972000 sudo    | NoKubernetes-972000       | jenkins | v1.29.0 | 23 Feb 23 14:39 PST |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-972000         | NoKubernetes-972000       | jenkins | v1.29.0 | 23 Feb 23 14:40 PST | 23 Feb 23 14:40 PST |
	| start   | -p NoKubernetes-972000         | NoKubernetes-972000       | jenkins | v1.29.0 | 23 Feb 23 14:40 PST | 23 Feb 23 14:40 PST |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-972000 sudo    | NoKubernetes-972000       | jenkins | v1.29.0 | 23 Feb 23 14:40 PST |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-972000         | NoKubernetes-972000       | jenkins | v1.29.0 | 23 Feb 23 14:40 PST | 23 Feb 23 14:40 PST |
	| start   | -p force-systemd-flag-212000   | force-systemd-flag-212000 | jenkins | v1.29.0 | 23 Feb 23 14:40 PST | 23 Feb 23 14:40 PST |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-212000      | force-systemd-flag-212000 | jenkins | v1.29.0 | 23 Feb 23 14:40 PST | 23 Feb 23 14:40 PST |
	|         | ssh docker info --format       |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-212000   | force-systemd-flag-212000 | jenkins | v1.29.0 | 23 Feb 23 14:40 PST | 23 Feb 23 14:40 PST |
	| start   | -p force-systemd-env-256000    | force-systemd-env-256000  | jenkins | v1.29.0 | 23 Feb 23 14:41 PST | 23 Feb 23 14:42 PST |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-256000       | force-systemd-env-256000  | jenkins | v1.29.0 | 23 Feb 23 14:42 PST | 23 Feb 23 14:42 PST |
	|         | ssh docker info --format       |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-256000    | force-systemd-env-256000  | jenkins | v1.29.0 | 23 Feb 23 14:42 PST | 23 Feb 23 14:42 PST |
	| start   | -p cert-expiration-912000      | cert-expiration-912000    | jenkins | v1.29.0 | 23 Feb 23 14:42 PST | 23 Feb 23 14:42 PST |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m           |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-880000   | kubernetes-upgrade-880000 | jenkins | v1.29.0 | 23 Feb 23 14:43 PST |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-880000   | kubernetes-upgrade-880000 | jenkins | v1.29.0 | 23 Feb 23 14:43 PST | 23 Feb 23 14:43 PST |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 14:43:35
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 14:43:35.776248   27151 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:43:35.776415   27151 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:43:35.776425   27151 out.go:309] Setting ErrFile to fd 2...
	I0223 14:43:35.776436   27151 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:43:35.776546   27151 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:43:35.777870   27151 out.go:303] Setting JSON to false
	I0223 14:43:35.796612   27151 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7989,"bootTime":1677184226,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 14:43:35.796705   27151 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 14:43:35.818619   27151 out.go:177] * [kubernetes-upgrade-880000] minikube v1.29.0 on Darwin 13.2
	I0223 14:43:35.860672   27151 notify.go:220] Checking for updates...
	I0223 14:43:35.860718   27151 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 14:43:35.882649   27151 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:43:35.903548   27151 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 14:43:35.924541   27151 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 14:43:35.982354   27151 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	I0223 14:43:36.040631   27151 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 14:43:36.079935   27151 config.go:182] Loaded profile config "kubernetes-upgrade-880000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:43:36.080300   27151 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 14:43:36.142710   27151 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 14:43:36.142977   27151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:43:36.285848   27151 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:61 SystemTime:2023-02-23 22:43:36.193852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default n
ame=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/U
sers/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:43:36.327218   27151 out.go:177] * Using the docker driver based on existing profile
	I0223 14:43:36.348295   27151 start.go:296] selected driver: docker
	I0223 14:43:36.348330   27151 start.go:857] validating driver "docker" against &{Name:kubernetes-upgrade-880000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-880000 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:43:36.348386   27151 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 14:43:36.351032   27151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:43:36.494391   27151 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:61 SystemTime:2023-02-23 22:43:36.401278046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:43:36.494545   27151 cni.go:84] Creating CNI manager for ""
	I0223 14:43:36.494559   27151 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 14:43:36.494571   27151 start_flags.go:319] config:
	{Name:kubernetes-upgrade-880000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-880000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP:}
	I0223 14:43:36.552920   27151 out.go:177] * Starting control plane node kubernetes-upgrade-880000 in cluster kubernetes-upgrade-880000
	I0223 14:43:36.574313   27151 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 14:43:36.611893   27151 out.go:177] * Pulling base image ...
	I0223 14:43:36.648885   27151 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 14:43:36.648912   27151 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 14:43:36.648959   27151 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 14:43:36.648973   27151 cache.go:57] Caching tarball of preloaded images
	I0223 14:43:36.649109   27151 preload.go:174] Found /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 14:43:36.649124   27151 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 14:43:36.649695   27151 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/config.json ...
	I0223 14:43:36.705466   27151 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 14:43:36.705489   27151 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 14:43:36.705517   27151 cache.go:193] Successfully downloaded all kic artifacts
	I0223 14:43:36.705636   27151 start.go:364] acquiring machines lock for kubernetes-upgrade-880000: {Name:mkd143b35f6b9196e66b282d1f19ff1f3380a692 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 14:43:36.705731   27151 start.go:368] acquired machines lock for "kubernetes-upgrade-880000" in 75.971µs
	I0223 14:43:36.705756   27151 start.go:96] Skipping create...Using existing machine configuration
	I0223 14:43:36.705764   27151 fix.go:55] fixHost starting: 
	I0223 14:43:36.706049   27151 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-880000 --format={{.State.Status}}
	I0223 14:43:36.764729   27151 fix.go:103] recreateIfNeeded on kubernetes-upgrade-880000: state=Running err=<nil>
	W0223 14:43:36.764773   27151 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 14:43:36.823295   27151 out.go:177] * Updating the running docker "kubernetes-upgrade-880000" container ...
	I0223 14:43:36.860398   27151 machine.go:88] provisioning docker machine ...
	I0223 14:43:36.860460   27151 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-880000"
	I0223 14:43:36.860629   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:36.919897   27151 main.go:141] libmachine: Using SSH client type: native
	I0223 14:43:36.920304   27151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59968 <nil> <nil>}
	I0223 14:43:36.920316   27151 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-880000 && echo "kubernetes-upgrade-880000" | sudo tee /etc/hostname
	I0223 14:43:37.063183   27151 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-880000
	
	I0223 14:43:37.063272   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:37.122763   27151 main.go:141] libmachine: Using SSH client type: native
	I0223 14:43:37.123108   27151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59968 <nil> <nil>}
	I0223 14:43:37.123121   27151 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-880000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-880000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-880000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 14:43:37.258780   27151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 14:43:37.258807   27151 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
	I0223 14:43:37.258825   27151 ubuntu.go:177] setting up certificates
	I0223 14:43:37.258837   27151 provision.go:83] configureAuth start
	I0223 14:43:37.258920   27151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-880000
	I0223 14:43:37.316522   27151 provision.go:138] copyHostCerts
	I0223 14:43:37.316629   27151 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
	I0223 14:43:37.316643   27151 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:43:37.316743   27151 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
	I0223 14:43:37.316959   27151 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
	I0223 14:43:37.316965   27151 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:43:37.317024   27151 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
	I0223 14:43:37.317173   27151 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
	I0223 14:43:37.317179   27151 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:43:37.317245   27151 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
	I0223 14:43:37.317364   27151 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-880000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-880000]
	I0223 14:43:37.494247   27151 provision.go:172] copyRemoteCerts
	I0223 14:43:37.494317   27151 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 14:43:37.494373   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:37.553336   27151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59968 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa Username:docker}
	I0223 14:43:37.647448   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 14:43:37.664863   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0223 14:43:37.682808   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 14:43:37.700950   27151 provision.go:86] duration metric: configureAuth took 442.083603ms
	I0223 14:43:37.700968   27151 ubuntu.go:193] setting minikube options for container-runtime
	I0223 14:43:37.701126   27151 config.go:182] Loaded profile config "kubernetes-upgrade-880000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:43:37.701191   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:37.759175   27151 main.go:141] libmachine: Using SSH client type: native
	I0223 14:43:37.759528   27151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59968 <nil> <nil>}
	I0223 14:43:37.759538   27151 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 14:43:37.892342   27151 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 14:43:37.892369   27151 ubuntu.go:71] root file system type: overlay
	I0223 14:43:37.892476   27151 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 14:43:37.892569   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:37.952081   27151 main.go:141] libmachine: Using SSH client type: native
	I0223 14:43:37.952431   27151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59968 <nil> <nil>}
	I0223 14:43:37.952480   27151 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 14:43:38.092401   27151 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 14:43:38.092513   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:38.151424   27151 main.go:141] libmachine: Using SSH client type: native
	I0223 14:43:38.151783   27151 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59968 <nil> <nil>}
	I0223 14:43:38.151797   27151 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 14:43:38.288353   27151 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 14:43:38.288371   27151 machine.go:91] provisioned docker machine in 1.427910846s
	I0223 14:43:38.288382   27151 start.go:300] post-start starting for "kubernetes-upgrade-880000" (driver="docker")
	I0223 14:43:38.288387   27151 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 14:43:38.288475   27151 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 14:43:38.288536   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:38.346606   27151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59968 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa Username:docker}
	I0223 14:43:38.441128   27151 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 14:43:38.444805   27151 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 14:43:38.444824   27151 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 14:43:38.444832   27151 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 14:43:38.444837   27151 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 14:43:38.444844   27151 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
	I0223 14:43:38.444935   27151 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
	I0223 14:43:38.445123   27151 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
	I0223 14:43:38.445289   27151 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 14:43:38.452810   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:43:38.469918   27151 start.go:303] post-start completed in 181.520943ms
	I0223 14:43:38.470019   27151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 14:43:38.470074   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:38.529073   27151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59968 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa Username:docker}
	I0223 14:43:38.619545   27151 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 14:43:38.624850   27151 fix.go:57] fixHost completed within 1.919026251s
	I0223 14:43:38.624866   27151 start.go:83] releasing machines lock for "kubernetes-upgrade-880000", held for 1.919072565s
	I0223 14:43:38.624995   27151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-880000
	I0223 14:43:38.684742   27151 ssh_runner.go:195] Run: cat /version.json
	I0223 14:43:38.684779   27151 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 14:43:38.684816   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:38.684846   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:38.750906   27151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59968 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa Username:docker}
	I0223 14:43:38.751086   27151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59968 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa Username:docker}
	I0223 14:43:38.894059   27151 ssh_runner.go:195] Run: systemctl --version
	I0223 14:43:38.899140   27151 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0223 14:43:38.903938   27151 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 14:43:38.904001   27151 ssh_runner.go:195] Run: which cri-dockerd
	I0223 14:43:38.908354   27151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 14:43:38.916001   27151 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 14:43:38.929335   27151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 14:43:38.937453   27151 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 14:43:38.944937   27151 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0223 14:43:38.944952   27151 start.go:485] detecting cgroup driver to use...
	I0223 14:43:38.944962   27151 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:43:38.945051   27151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:43:38.958214   27151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 14:43:38.967134   27151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 14:43:38.976415   27151 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 14:43:38.976485   27151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 14:43:38.985353   27151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:43:38.994051   27151 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 14:43:39.002433   27151 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:43:39.010985   27151 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 14:43:39.018761   27151 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 14:43:39.027376   27151 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 14:43:39.034523   27151 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 14:43:39.041708   27151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:43:39.120486   27151 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 14:43:40.941720   27151 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (1.821161516s)
	I0223 14:43:40.941739   27151 start.go:485] detecting cgroup driver to use...
	I0223 14:43:40.941751   27151 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:43:40.941819   27151 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 14:43:40.952411   27151 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 14:43:40.952475   27151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 14:43:40.962506   27151 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:43:40.979993   27151 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 14:43:41.092286   27151 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 14:43:41.219159   27151 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 14:43:41.219179   27151 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 14:43:41.232738   27151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:43:41.334390   27151 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 14:43:41.961275   27151 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 14:43:42.033259   27151 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 14:43:42.107206   27151 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 14:43:42.176530   27151 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:43:42.242605   27151 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 14:43:42.259155   27151 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 14:43:42.259253   27151 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 14:43:42.263899   27151 start.go:553] Will wait 60s for crictl version
	I0223 14:43:42.263964   27151 ssh_runner.go:195] Run: which crictl
	I0223 14:43:42.268190   27151 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 14:43:42.333079   27151 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 14:43:42.333191   27151 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:43:42.359000   27151 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:43:42.414889   27151 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 14:43:42.414996   27151 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-880000 dig +short host.docker.internal
	I0223 14:43:42.533406   27151 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 14:43:42.533516   27151 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 14:43:42.538150   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:42.597047   27151 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 14:43:42.597126   27151 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:43:42.618767   27151 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 14:43:42.618784   27151 docker.go:560] Images already preloaded, skipping extraction
	I0223 14:43:42.618890   27151 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:43:42.641994   27151 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 14:43:42.642013   27151 cache_images.go:84] Images are preloaded, skipping loading
	I0223 14:43:42.642093   27151 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 14:43:42.668368   27151 cni.go:84] Creating CNI manager for ""
	I0223 14:43:42.668387   27151 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 14:43:42.668428   27151 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 14:43:42.668453   27151 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-880000 NodeName:kubernetes-upgrade-880000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 14:43:42.668580   27151 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-880000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 14:43:42.668661   27151 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-880000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-880000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 14:43:42.668724   27151 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 14:43:42.676547   27151 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 14:43:42.676609   27151 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 14:43:42.683962   27151 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (457 bytes)
	I0223 14:43:42.696928   27151 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 14:43:42.709935   27151 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0223 14:43:42.722796   27151 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0223 14:43:42.726758   27151 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000 for IP: 192.168.67.2
	I0223 14:43:42.726775   27151 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:43:42.726940   27151 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
	I0223 14:43:42.726996   27151 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
	I0223 14:43:42.727089   27151 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.key
	I0223 14:43:42.727181   27151 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.key.c7fa3a9e
	I0223 14:43:42.727248   27151 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/proxy-client.key
	I0223 14:43:42.727458   27151 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
	W0223 14:43:42.727494   27151 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
	I0223 14:43:42.727504   27151 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 14:43:42.727534   27151 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
	I0223 14:43:42.727567   27151 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
	I0223 14:43:42.727597   27151 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
	I0223 14:43:42.727671   27151 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:43:42.728271   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 14:43:42.745374   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 14:43:42.762394   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 14:43:42.780274   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0223 14:43:42.802151   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 14:43:42.827387   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 14:43:42.898166   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 14:43:42.917467   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 14:43:42.981994   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
	I0223 14:43:43.003277   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 14:43:43.085485   27151 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
	I0223 14:43:43.107684   27151 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 14:43:43.187474   27151 ssh_runner.go:195] Run: openssl version
	I0223 14:43:43.193626   27151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
	I0223 14:43:43.203055   27151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
	I0223 14:43:43.207674   27151 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:43:43.207739   27151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
	I0223 14:43:43.214235   27151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 14:43:43.222716   27151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 14:43:43.232071   27151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:43:43.237013   27151 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:43:43.237087   27151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:43:43.242738   27151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 14:43:43.290685   27151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
	I0223 14:43:43.301232   27151 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
	I0223 14:43:43.305710   27151 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:43:43.305766   27151 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
	I0223 14:43:43.312212   27151 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
	I0223 14:43:43.325310   27151 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-880000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-880000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:43:43.325495   27151 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 14:43:43.388472   27151 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 14:43:43.397106   27151 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 14:43:43.397124   27151 kubeadm.go:633] restartCluster start
	I0223 14:43:43.397186   27151 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 14:43:43.405814   27151 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:43:43.405927   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:43.468371   27151 kubeconfig.go:92] found "kubernetes-upgrade-880000" server: "https://127.0.0.1:59967"
	I0223 14:43:43.469156   27151 kapi.go:59] client config for kubernetes-upgrade-880000: &rest.Config{Host:"https://127.0.0.1:59967", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:43:43.469915   27151 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 14:43:43.482061   27151 api_server.go:165] Checking apiserver status ...
	I0223 14:43:43.482148   27151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:43:43.491301   27151 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12876/cgroup
	W0223 14:43:43.499768   27151 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12876/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:43:43.499823   27151 ssh_runner.go:195] Run: ls
	I0223 14:43:43.503867   27151 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59967/healthz ...
	I0223 14:43:45.877223   27151 api_server.go:278] https://127.0.0.1:59967/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 14:43:45.877286   27151 retry.go:31] will retry after 288.567527ms: https://127.0.0.1:59967/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 14:43:46.166415   27151 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59967/healthz ...
	I0223 14:43:46.173027   27151 api_server.go:278] https://127.0.0.1:59967/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 14:43:46.173053   27151 retry.go:31] will retry after 339.360019ms: https://127.0.0.1:59967/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 14:43:46.512597   27151 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59967/healthz ...
	I0223 14:43:46.519305   27151 api_server.go:278] https://127.0.0.1:59967/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 14:43:46.519323   27151 retry.go:31] will retry after 484.209198ms: https://127.0.0.1:59967/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 14:43:47.003828   27151 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59967/healthz ...
	I0223 14:43:47.009033   27151 api_server.go:278] https://127.0.0.1:59967/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 14:43:47.009060   27151 retry.go:31] will retry after 442.171429ms: https://127.0.0.1:59967/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 14:43:47.452175   27151 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59967/healthz ...
	I0223 14:43:47.459416   27151 api_server.go:278] https://127.0.0.1:59967/healthz returned 200:
	ok
	I0223 14:43:47.471153   27151 system_pods.go:86] 5 kube-system pods found
	I0223 14:43:47.471169   27151 system_pods.go:89] "etcd-kubernetes-upgrade-880000" [89c5fc1c-f49a-47b2-9cfc-53ccb522f4e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0223 14:43:47.471177   27151 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-880000" [568459a1-1148-4359-8ec4-40c3242a7cf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0223 14:43:47.471187   27151 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-880000" [6aeb5187-83b3-4c2c-ba9b-4e106541995b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0223 14:43:47.471193   27151 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-880000" [5d61eece-f524-4081-8254-fac09879bbd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0223 14:43:47.471199   27151 system_pods.go:89] "storage-provisioner" [d32fb6ff-7b0b-4d71-8f2a-cf3bdbfe7b53] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0223 14:43:47.471206   27151 kubeadm.go:617] needs reconfigure: missing components: kube-dns, kube-proxy
	I0223 14:43:47.471213   27151 kubeadm.go:1120] stopping kube-system containers ...
	I0223 14:43:47.471280   27151 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 14:43:47.492675   27151 docker.go:456] Stopping containers: [6180a9ab06e6 05b3f347caa1 bec87bf3f3e3 da67f799e6d5 9e6bdc5f79f9 edab9f7f36bd b395a89cf0a9 d71c3481b35d 6de59a2e3486 4f4f880175d0 7742c82d8eec 17e463b6b56b 495a6387b4fc 099567766e42 131aee5405c4 52b0c5f63920 9778871ccc85]
	I0223 14:43:47.492776   27151 ssh_runner.go:195] Run: docker stop 6180a9ab06e6 05b3f347caa1 bec87bf3f3e3 da67f799e6d5 9e6bdc5f79f9 edab9f7f36bd b395a89cf0a9 d71c3481b35d 6de59a2e3486 4f4f880175d0 7742c82d8eec 17e463b6b56b 495a6387b4fc 099567766e42 131aee5405c4 52b0c5f63920 9778871ccc85
	I0223 14:43:48.088004   27151 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 14:43:48.131617   27151 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 14:43:48.185287   27151 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 23 22:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 23 22:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Feb 23 22:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 23 22:43 /etc/kubernetes/scheduler.conf
	
	I0223 14:43:48.185372   27151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0223 14:43:48.196324   27151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0223 14:43:48.205763   27151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0223 14:43:48.214027   27151 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:43:48.214092   27151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0223 14:43:48.222082   27151 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0223 14:43:48.230287   27151 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:43:48.230389   27151 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0223 14:43:48.238224   27151 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 14:43:48.246031   27151 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 14:43:48.246042   27151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 14:43:48.303550   27151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 14:43:49.036499   27151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 14:43:49.174160   27151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 14:43:49.232098   27151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 14:43:49.321491   27151 api_server.go:51] waiting for apiserver process to appear ...
	I0223 14:43:49.321590   27151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:43:49.887883   27151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:43:50.388467   27151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:43:50.398568   27151 api_server.go:71] duration metric: took 1.077045251s to wait for apiserver process to appear ...
	I0223 14:43:50.398584   27151 api_server.go:87] waiting for apiserver healthz status ...
	I0223 14:43:50.398593   27151 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59967/healthz ...
	I0223 14:43:52.192505   27151 api_server.go:278] https://127.0.0.1:59967/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0223 14:43:52.192524   27151 api_server.go:102] status: https://127.0.0.1:59967/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 14:43:52.694229   27151 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59967/healthz ...
	I0223 14:43:52.701172   27151 api_server.go:278] https://127.0.0.1:59967/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 14:43:52.701189   27151 api_server.go:102] status: https://127.0.0.1:59967/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 14:43:53.192749   27151 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59967/healthz ...
	I0223 14:43:53.198230   27151 api_server.go:278] https://127.0.0.1:59967/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 14:43:53.198247   27151 api_server.go:102] status: https://127.0.0.1:59967/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 14:43:53.693647   27151 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59967/healthz ...
	I0223 14:43:53.700495   27151 api_server.go:278] https://127.0.0.1:59967/healthz returned 200:
	ok
	I0223 14:43:53.707813   27151 api_server.go:140] control plane version: v1.26.1
	I0223 14:43:53.707826   27151 api_server.go:130] duration metric: took 3.30913952s to wait for apiserver health ...
	I0223 14:43:53.707832   27151 cni.go:84] Creating CNI manager for ""
	I0223 14:43:53.707842   27151 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 14:43:53.732140   27151 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0223 14:43:53.753045   27151 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0223 14:43:53.761325   27151 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0223 14:43:53.773911   27151 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 14:43:53.779487   27151 system_pods.go:59] 5 kube-system pods found
	I0223 14:43:53.779504   27151 system_pods.go:61] "etcd-kubernetes-upgrade-880000" [89c5fc1c-f49a-47b2-9cfc-53ccb522f4e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0223 14:43:53.779511   27151 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-880000" [568459a1-1148-4359-8ec4-40c3242a7cf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0223 14:43:53.779520   27151 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-880000" [6aeb5187-83b3-4c2c-ba9b-4e106541995b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0223 14:43:53.779527   27151 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-880000" [5d61eece-f524-4081-8254-fac09879bbd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0223 14:43:53.779533   27151 system_pods.go:61] "storage-provisioner" [d32fb6ff-7b0b-4d71-8f2a-cf3bdbfe7b53] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0223 14:43:53.779538   27151 system_pods.go:74] duration metric: took 5.616631ms to wait for pod list to return data ...
	I0223 14:43:53.779548   27151 node_conditions.go:102] verifying NodePressure condition ...
	I0223 14:43:53.782878   27151 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0223 14:43:53.782892   27151 node_conditions.go:123] node cpu capacity is 6
	I0223 14:43:53.782902   27151 node_conditions.go:105] duration metric: took 3.348618ms to run NodePressure ...
	I0223 14:43:53.782921   27151 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 14:43:53.914399   27151 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 14:43:53.921825   27151 ops.go:34] apiserver oom_adj: -16
	I0223 14:43:53.921836   27151 kubeadm.go:637] restartCluster took 10.524401427s
	I0223 14:43:53.921843   27151 kubeadm.go:403] StartCluster complete in 10.596237503s
	I0223 14:43:53.921857   27151 settings.go:142] acquiring lock: {Name:mk5254606ab776d081c4c857df8d4e00b86fee57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:43:53.921936   27151 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:43:53.922656   27151 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/kubeconfig: {Name:mk366c13f6069774a57c4d74123d5172c8522a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:43:53.929933   27151 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 14:43:53.929959   27151 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0223 14:43:53.930016   27151 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-880000"
	I0223 14:43:53.930032   27151 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-880000"
	I0223 14:43:53.930037   27151 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-880000"
	W0223 14:43:53.930039   27151 addons.go:236] addon storage-provisioner should already be in state true
	I0223 14:43:53.930074   27151 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-880000"
	I0223 14:43:53.930088   27151 host.go:66] Checking if "kubernetes-upgrade-880000" exists ...
	I0223 14:43:53.930111   27151 config.go:182] Loaded profile config "kubernetes-upgrade-880000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:43:53.930328   27151 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-880000 --format={{.State.Status}}
	I0223 14:43:53.930421   27151 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-880000 --format={{.State.Status}}
	I0223 14:43:53.930444   27151 kapi.go:59] client config for kubernetes-upgrade-880000: &rest.Config{Host:"https://127.0.0.1:59967", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:43:53.937018   27151 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-880000" context rescaled to 1 replicas
	I0223 14:43:53.937047   27151 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 14:43:53.971844   27151 out.go:177] * Verifying Kubernetes components...
	I0223 14:43:53.992964   27151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:43:54.005559   27151 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0223 14:43:54.028008   27151 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 14:43:54.009038   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:54.010401   27151 kapi.go:59] client config for kubernetes-upgrade-880000: &rest.Config{Host:"https://127.0.0.1:59967", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubernetes-upgrade-880000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil)
, CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 14:43:54.049010   27151 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 14:43:54.049025   27151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 14:43:54.049132   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:54.060370   27151 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-880000"
	W0223 14:43:54.060391   27151 addons.go:236] addon default-storageclass should already be in state true
	I0223 14:43:54.060408   27151 host.go:66] Checking if "kubernetes-upgrade-880000" exists ...
	I0223 14:43:54.060756   27151 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-880000 --format={{.State.Status}}
	I0223 14:43:54.116461   27151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59968 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa Username:docker}
	I0223 14:43:54.116577   27151 api_server.go:51] waiting for apiserver process to appear ...
	I0223 14:43:54.116652   27151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:43:54.126256   27151 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 14:43:54.126268   27151 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 14:43:54.126348   27151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-880000
	I0223 14:43:54.129222   27151 api_server.go:71] duration metric: took 192.148401ms to wait for apiserver process to appear ...
	I0223 14:43:54.129248   27151 api_server.go:87] waiting for apiserver healthz status ...
	I0223 14:43:54.129264   27151 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59967/healthz ...
	I0223 14:43:54.134865   27151 api_server.go:278] https://127.0.0.1:59967/healthz returned 200:
	ok
	I0223 14:43:54.136314   27151 api_server.go:140] control plane version: v1.26.1
	I0223 14:43:54.136324   27151 api_server.go:130] duration metric: took 7.069025ms to wait for apiserver health ...
	I0223 14:43:54.136328   27151 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 14:43:54.140785   27151 system_pods.go:59] 5 kube-system pods found
	I0223 14:43:54.140802   27151 system_pods.go:61] "etcd-kubernetes-upgrade-880000" [89c5fc1c-f49a-47b2-9cfc-53ccb522f4e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0223 14:43:54.140810   27151 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-880000" [568459a1-1148-4359-8ec4-40c3242a7cf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0223 14:43:54.140818   27151 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-880000" [6aeb5187-83b3-4c2c-ba9b-4e106541995b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0223 14:43:54.140823   27151 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-880000" [5d61eece-f524-4081-8254-fac09879bbd8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0223 14:43:54.140827   27151 system_pods.go:61] "storage-provisioner" [d32fb6ff-7b0b-4d71-8f2a-cf3bdbfe7b53] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0223 14:43:54.140832   27151 system_pods.go:74] duration metric: took 4.494042ms to wait for pod list to return data ...
	I0223 14:43:54.140839   27151 kubeadm.go:578] duration metric: took 203.769965ms to wait for : map[apiserver:true system_pods:true] ...
	I0223 14:43:54.140848   27151 node_conditions.go:102] verifying NodePressure condition ...
	I0223 14:43:54.144091   27151 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0223 14:43:54.144103   27151 node_conditions.go:123] node cpu capacity is 6
	I0223 14:43:54.144114   27151 node_conditions.go:105] duration metric: took 3.263497ms to run NodePressure ...
	I0223 14:43:54.144122   27151 start.go:228] waiting for startup goroutines ...
	I0223 14:43:54.188295   27151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59968 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/kubernetes-upgrade-880000/id_rsa Username:docker}
	I0223 14:43:54.220137   27151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 14:43:54.294016   27151 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 14:43:55.043266   27151 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0223 14:43:55.079846   27151 addons.go:492] enable addons completed in 1.149855425s: enabled=[storage-provisioner default-storageclass]
	I0223 14:43:55.079897   27151 start.go:233] waiting for cluster config update ...
	I0223 14:43:55.079918   27151 start.go:242] writing updated cluster config ...
	I0223 14:43:55.080381   27151 ssh_runner.go:195] Run: rm -f paused
	I0223 14:43:55.119638   27151 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0223 14:43:55.140954   27151 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-880000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 22:38:54 UTC, end at Thu 2023-02-23 22:43:56 UTC. --
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.775537537Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.775583202Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.775601600Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.775632095Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.775650726Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.775663554Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.775717781Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.775982933Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.776048157Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.776468727Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.796184486Z" level=info msg="Loading containers: start."
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.893732573Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.927778031Z" level=info msg="Loading containers: done."
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.936186081Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.936255269Z" level=info msg="Daemon has completed initialization"
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.958662062Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 22:43:41 kubernetes-upgrade-880000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.962324082Z" level=info msg="API listen on [::]:2376"
	Feb 23 22:43:41 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:41.965442699Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 23 22:43:47 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:47.578824325Z" level=info msg="ignoring event" container=da67f799e6d5268f0374ac9b3e19d7f47a1d9403732992ca0eb31b93e078ea94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:43:47 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:47.578869621Z" level=info msg="ignoring event" container=edab9f7f36bd175afb2863cc2eec560930325ecde25fc2f2f7236de21f0cd760 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:43:47 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:47.582572562Z" level=info msg="ignoring event" container=bec87bf3f3e37d9df11cec78e4e320fa39b60f70da89ffddfefee8efbe701229 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:43:47 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:47.584854064Z" level=info msg="ignoring event" container=9e6bdc5f79f9847185cec06d07d49ffe237dbf0045e413f4a27e1bbe48013119 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:43:47 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:47.595129404Z" level=info msg="ignoring event" container=05b3f347caa1d7ee63a8511b9c22d87812bc92475d1a919205904280ae6f0e08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 22:43:48 kubernetes-upgrade-880000 dockerd[12182]: time="2023-02-23T22:43:48.008279985Z" level=info msg="ignoring event" container=6180a9ab06e645842578638249b620034bd0a6ddd85c137b544b3666bbf6718f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	b8166a5aed2f8       655493523f607       7 seconds ago       Running             kube-scheduler            2                   e4315512d4db7
	0525f28d8fc5c       e9c08e11b07f6       7 seconds ago       Running             kube-controller-manager   2                   fd3be0405052f
	ff8c3a0ca7644       deb04688c4a35       7 seconds ago       Running             kube-apiserver            2                   8a31ab652741c
	3c48ae57bf35f       fce326961ae2d       7 seconds ago       Running             etcd                      2                   62be414a3150c
	6180a9ab06e64       deb04688c4a35       13 seconds ago      Exited              kube-apiserver            1                   bec87bf3f3e37
	05b3f347caa1d       fce326961ae2d       13 seconds ago      Exited              etcd                      1                   edab9f7f36bd1
	d71c3481b35d8       e9c08e11b07f6       17 seconds ago      Exited              kube-controller-manager   1                   4f4f880175d07
	6de59a2e34868       655493523f607       17 seconds ago      Exited              kube-scheduler            1                   7742c82d8eec9
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-880000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-880000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0
	                    minikube.k8s.io/name=kubernetes-upgrade-880000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T14_43_33_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 22:43:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-880000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 22:43:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 22:43:52 +0000   Thu, 23 Feb 2023 22:43:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 22:43:52 +0000   Thu, 23 Feb 2023 22:43:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 22:43:52 +0000   Thu, 23 Feb 2023 22:43:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 22:43:52 +0000   Thu, 23 Feb 2023 22:43:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-880000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    5486766d-d32d-40b6-9600-b780b0c83991
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-880000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         24s
	  kube-system                 kube-apiserver-kubernetes-upgrade-880000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-880000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-kubernetes-upgrade-880000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  29s (x4 over 29s)  kubelet  Node kubernetes-upgrade-880000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x4 over 29s)  kubelet  Node kubernetes-upgrade-880000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x3 over 29s)  kubelet  Node kubernetes-upgrade-880000 status is now: NodeHasSufficientPID
	  Normal  Starting                 23s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  23s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23s                kubelet  Node kubernetes-upgrade-880000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s                kubelet  Node kubernetes-upgrade-880000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s                kubelet  Node kubernetes-upgrade-880000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18s                kubelet  Node kubernetes-upgrade-880000 status is now: NodeReady
	  Normal  Starting                 7s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x6 over 7s)    kubelet  Node kubernetes-upgrade-880000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x6 over 7s)    kubelet  Node kubernetes-upgrade-880000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x6 over 7s)    kubelet  Node kubernetes-upgrade-880000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                 kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000066] FS-Cache: O-key=[8] '7136580500000000'
	[  +0.000050] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000051] FS-Cache: N-cookie d=00000000f0b26649{9p.inode} n=0000000032a0fa48
	[  +0.000163] FS-Cache: N-key=[8] '7136580500000000'
	[  +0.002658] FS-Cache: Duplicate cookie detected
	[  +0.000052] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000050] FS-Cache: O-cookie d=00000000f0b26649{9p.inode} n=000000008e7781d6
	[  +0.000070] FS-Cache: O-key=[8] '7136580500000000'
	[  +0.000028] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000113] FS-Cache: N-cookie d=00000000f0b26649{9p.inode} n=000000004fece264
	[  +0.000061] FS-Cache: N-key=[8] '7136580500000000'
	[Feb23 22:08] FS-Cache: Duplicate cookie detected
	[  +0.000034] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000058] FS-Cache: O-cookie d=00000000f0b26649{9p.inode} n=000000006ea4f74a
	[  +0.000063] FS-Cache: O-key=[8] '7036580500000000'
	[  +0.000034] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000041] FS-Cache: N-cookie d=00000000f0b26649{9p.inode} n=000000000a668217
	[  +0.000066] FS-Cache: N-key=[8] '7036580500000000'
	[  +0.413052] FS-Cache: Duplicate cookie detected
	[  +0.000113] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000056] FS-Cache: O-cookie d=00000000f0b26649{9p.inode} n=00000000634601b2
	[  +0.000097] FS-Cache: O-key=[8] '7736580500000000'
	[  +0.000045] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000055] FS-Cache: N-cookie d=00000000f0b26649{9p.inode} n=000000004fece264
	[  +0.000089] FS-Cache: N-key=[8] '7736580500000000'
	
	* 
	* ==> etcd [05b3f347caa1] <==
	* {"level":"info","ts":"2023-02-23T22:43:43.408Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-23T22:43:43.408Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-02-23T22:43:43.408Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-02-23T22:43:43.409Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-23T22:43:43.409Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-23T22:43:44.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-02-23T22:43:44.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-02-23T22:43:44.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-02-23T22:43:44.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-02-23T22:43:44.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-02-23T22:43:44.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-02-23T22:43:44.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-02-23T22:43:44.799Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-880000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T22:43:44.799Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:43:44.799Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:43:44.799Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T22:43:44.799Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T22:43:44.800Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T22:43:44.800Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-02-23T22:43:47.521Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-23T22:43:47.521Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-880000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"info","ts":"2023-02-23T22:43:47.534Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-02-23T22:43:47.536Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-02-23T22:43:47.537Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-02-23T22:43:47.537Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-880000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [3c48ae57bf35] <==
	* {"level":"info","ts":"2023-02-23T22:43:50.090Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-23T22:43:50.090Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-23T22:43:50.090Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-02-23T22:43:50.090Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-02-23T22:43:50.090Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:43:50.090Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T22:43:50.091Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-23T22:43:50.091Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-02-23T22:43:50.091Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-02-23T22:43:50.091Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-23T22:43:50.091Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-23T22:43:51.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-02-23T22:43:51.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-02-23T22:43:51.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-02-23T22:43:51.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-02-23T22:43:51.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-02-23T22:43:51.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-02-23T22:43:51.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-02-23T22:43:51.117Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-880000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T22:43:51.117Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:43:51.117Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T22:43:51.118Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T22:43:51.119Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T22:43:51.120Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T22:43:51.121Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:43:57 up  2:13,  0 users,  load average: 0.75, 1.15, 1.07
	Linux kubernetes-upgrade-880000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [6180a9ab06e6] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 22:43:47.528024       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 22:43:47.528054       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0223 22:43:47.528065       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [ff8c3a0ca764] <==
	* I0223 22:43:52.181079       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0223 22:43:52.181109       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0223 22:43:52.181126       1 available_controller.go:494] Starting AvailableConditionController
	I0223 22:43:52.181129       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0223 22:43:52.180746       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0223 22:43:52.183096       1 controller.go:121] Starting legacy_token_tracking_controller
	I0223 22:43:52.183129       1 shared_informer.go:273] Waiting for caches to sync for configmaps
	E0223 22:43:52.214486       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0223 22:43:52.215885       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 22:43:52.236268       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0223 22:43:52.280538       1 cache.go:39] Caches are synced for autoregister controller
	I0223 22:43:52.280622       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0223 22:43:52.280658       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0223 22:43:52.280714       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0223 22:43:52.281086       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 22:43:52.281130       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0223 22:43:52.281164       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0223 22:43:52.283464       1 shared_informer.go:280] Caches are synced for configmaps
	I0223 22:43:53.003897       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 22:43:53.184297       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 22:43:53.855023       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0223 22:43:53.863388       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0223 22:43:53.883054       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0223 22:43:53.900066       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 22:43:53.906238       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [0525f28d8fc5] <==
	* I0223 22:43:55.726288       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I0223 22:43:55.726297       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0223 22:43:55.726306       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for limitranges
	I0223 22:43:55.726350       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpoints
	I0223 22:43:55.726412       1 controllermanager.go:622] Started "resourcequota"
	I0223 22:43:55.726545       1 resource_quota_controller.go:277] Starting resource quota controller
	I0223 22:43:55.726621       1 shared_informer.go:273] Waiting for caches to sync for resource quota
	I0223 22:43:55.726665       1 resource_quota_monitor.go:295] QuotaMonitor running
	I0223 22:43:55.817117       1 controllermanager.go:622] Started "disruption"
	I0223 22:43:55.817143       1 disruption.go:424] Sending events to api server.
	I0223 22:43:55.817170       1 disruption.go:435] Starting disruption controller
	I0223 22:43:55.817175       1 shared_informer.go:273] Waiting for caches to sync for disruption
	I0223 22:43:55.867777       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-serving"
	I0223 22:43:55.867938       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0223 22:43:55.867863       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0223 22:43:55.868224       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0223 22:43:55.868243       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0223 22:43:55.868265       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-client"
	I0223 22:43:55.868254       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0223 22:43:55.868273       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0223 22:43:55.868475       1 controllermanager.go:622] Started "csrsigning"
	I0223 22:43:55.868511       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0223 22:43:55.868280       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0223 22:43:55.868500       1 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
	I0223 22:43:55.868607       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	
	* 
	* ==> kube-controller-manager [d71c3481b35d] <==
	* I0223 22:43:40.414421       1 serving.go:348] Generated self-signed cert in-memory
	I0223 22:43:40.565250       1 controllermanager.go:182] Version: v1.26.1
	I0223 22:43:40.565291       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 22:43:40.566356       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0223 22:43:40.566418       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0223 22:43:40.566417       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0223 22:43:40.576620       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-scheduler [6de59a2e3486] <==
	* E0223 22:43:40.729305       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0223 22:43:40.729420       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0223 22:43:40.729492       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0223 22:43:40.728851       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0223 22:43:40.729520       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0223 22:43:40.729517       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.67.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0223 22:43:40.729582       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.67.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0223 22:43:40.729631       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0223 22:43:40.729050       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0223 22:43:40.729463       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0223 22:43:40.729776       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0223 22:43:40.729734       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0223 22:43:40.729783       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0223 22:43:40.729850       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.67.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0223 22:43:40.729891       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.67.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0223 22:43:40.729908       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0223 22:43:40.729938       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.67.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0223 22:43:40.729938       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0223 22:43:40.730004       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.67.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0223 22:43:40.730049       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.67.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0223 22:43:40.730076       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0223 22:43:40.730097       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0223 22:43:41.092913       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0223 22:43:41.092957       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0223 22:43:41.093123       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [b8166a5aed2f] <==
	* I0223 22:43:50.523620       1 serving.go:348] Generated self-signed cert in-memory
	W0223 22:43:52.196195       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0223 22:43:52.196237       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0223 22:43:52.196246       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0223 22:43:52.196250       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0223 22:43:52.212658       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0223 22:43:52.212703       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 22:43:52.213868       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0223 22:43:52.213982       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0223 22:43:52.213993       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0223 22:43:52.214013       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0223 22:43:52.314204       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 22:38:54 UTC, end at Thu 2023-02-23 22:43:58 UTC. --
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.708684   13421 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e88ee3ac14958fadce0226ecc4f972f-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-880000\" (UID: \"6e88ee3ac14958fadce0226ecc4f972f\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-880000"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.708713   13421 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e88ee3ac14958fadce0226ecc4f972f-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-880000\" (UID: \"6e88ee3ac14958fadce0226ecc4f972f\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-880000"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.708912   13421 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/460d23926e9465bed59834ea46217d7d-etc-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-880000\" (UID: \"460d23926e9465bed59834ea46217d7d\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-880000"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.709028   13421 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/460d23926e9465bed59834ea46217d7d-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-880000\" (UID: \"460d23926e9465bed59834ea46217d7d\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-880000"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.709132   13421 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e88ee3ac14958fadce0226ecc4f972f-etc-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-880000\" (UID: \"6e88ee3ac14958fadce0226ecc4f972f\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-880000"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.709176   13421 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e88ee3ac14958fadce0226ecc4f972f-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-880000\" (UID: \"6e88ee3ac14958fadce0226ecc4f972f\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-880000"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.709236   13421 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/460d23926e9465bed59834ea46217d7d-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-880000\" (UID: \"460d23926e9465bed59834ea46217d7d\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-880000"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.709339   13421 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6e88ee3ac14958fadce0226ecc4f972f-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-880000\" (UID: \"6e88ee3ac14958fadce0226ecc4f972f\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-880000"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.709412   13421 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e88ee3ac14958fadce0226ecc4f972f-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-880000\" (UID: \"6e88ee3ac14958fadce0226ecc4f972f\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-880000"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.709519   13421 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e88ee3ac14958fadce0226ecc4f972f-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-880000\" (UID: \"6e88ee3ac14958fadce0226ecc4f972f\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-880000"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.824715   13421 scope.go:115] "RemoveContainer" containerID="05b3f347caa1d7ee63a8511b9c22d87812bc92475d1a919205904280ae6f0e08"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.832791   13421 scope.go:115] "RemoveContainer" containerID="6180a9ab06e645842578638249b620034bd0a6ddd85c137b544b3666bbf6718f"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.840323   13421 scope.go:115] "RemoveContainer" containerID="d71c3481b35d8f01514f5eee7207b12631c8698781a9522913a50c7d06cfc06a"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:49.848912   13421 scope.go:115] "RemoveContainer" containerID="6de59a2e348689ff0b9fa460d8f04254a55e85c9702caf98be9d78396035b0a8"
	Feb 23 22:43:49 kubernetes-upgrade-880000 kubelet[13421]: E0223 22:43:49.909360   13421 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-880000?timeout=10s": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 23 22:43:50 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:50.085997   13421 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-880000"
	Feb 23 22:43:50 kubernetes-upgrade-880000 kubelet[13421]: E0223 22:43:50.086294   13421 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-880000"
	Feb 23 22:43:50 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:50.893669   13421 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-880000"
	Feb 23 22:43:52 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:52.297843   13421 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-880000"
	Feb 23 22:43:52 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:52.297940   13421 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-880000"
	Feb 23 22:43:52 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:52.298740   13421 apiserver.go:52] "Watching apiserver"
	Feb 23 22:43:52 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:52.307100   13421 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 23 22:43:52 kubernetes-upgrade-880000 kubelet[13421]: I0223 22:43:52.330523   13421 reconciler.go:41] "Reconciler: start to sync state"
	Feb 23 22:43:52 kubernetes-upgrade-880000 kubelet[13421]: E0223 22:43:52.706570   13421 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-880000\" already exists" pod="kube-system/etcd-kubernetes-upgrade-880000"
	Feb 23 22:43:52 kubernetes-upgrade-880000 kubelet[13421]: E0223 22:43:52.902749   13421 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-880000\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-880000"
	

                                                
                                                
-- /stdout --
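The control-plane logs above show the usual symptom of the apiserver restarting mid-upgrade: kube-scheduler, kube-controller-manager and the kubelet all log "connection refused" against https://192.168.67.2:8443, and the replacement scheduler and controller-manager instances then come up and sync their caches. A quick way to confirm the upgraded apiserver actually came back on this profile (a hypothetical manual follow-up, not part of the test harness) would be:

	out/minikube-darwin-amd64 status -p kubernetes-upgrade-880000
	kubectl --context kubernetes-upgrade-880000 get --raw /readyz          # apiserver readiness endpoint
	kubectl --context kubernetes-upgrade-880000 get pods -n kube-system    # control-plane pods after the restart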
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-880000 -n kubernetes-upgrade-880000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-880000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-880000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-880000 describe pod storage-provisioner: exit status 1 (51.078528ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-880000 describe pod storage-provisioner: exit status 1
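The non-running-pod query above spans all namespaces (-A), so the storage-provisioner it reports lives in kube-system, while the describe is run without a namespace and therefore looks in default, which is why it comes back NotFound. A namespaced describe (a hypothetical follow-up, not something the harness issues) would be:

	kubectl --context kubernetes-upgrade-880000 -n kube-system describe pod storage-provisioner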
helpers_test.go:175: Cleaning up "kubernetes-upgrade-880000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-880000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-880000: (2.739945884s)
--- FAIL: TestKubernetesUpgrade (562.34s)

                                                
                                    
TestMissingContainerUpgrade (75.35s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.3034331854.exe start -p missing-upgrade-960000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.3034331854.exe start -p missing-upgrade-960000 --memory=2200 --driver=docker : exit status 78 (59.046289456s)

                                                
                                                
-- stdout --
	! [missing-upgrade-960000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-960000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-960000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.29.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.29.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 216.21 KiB ... 542.91 MiB  (intermediate download-progress updates elided)
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:34:43.998901805 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-960000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:34:55.074979985 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
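The rewritten unit quoted above relies on the standard systemd override pattern described in its own comments: a bare ExecStart= line clears the command inherited from the stock docker.service, and the second ExecStart= supplies the full dockerd command line; without the clearing line, systemd rejects the unit with the "more than one ExecStart= setting" error the comments mention. Since provisioning failed at exactly this step, one way to see what the node's systemd actually ended up with (hypothetical checks from the host, assuming the missing-upgrade-960000 container from this run is still up) is:

	docker exec missing-upgrade-960000 systemctl cat docker.service                   # effective unit after the rewrite
	docker exec missing-upgrade-960000 systemctl status docker.service --no-pager
	docker exec missing-upgrade-960000 journalctl -u docker.service --no-pager -n 30  # why the restart failed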
version_upgrade_test.go:317: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.3034331854.exe start -p missing-upgrade-960000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.3034331854.exe start -p missing-upgrade-960000 --memory=2200 --driver=docker : exit status 70 (3.946700171s)

                                                
                                                
-- stdout --
	* [missing-upgrade-960000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-960000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-960000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:317: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.3034331854.exe start -p missing-upgrade-960000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.3034331854.exe start -p missing-upgrade-960000 --memory=2200 --driver=docker : exit status 70 (3.994510232s)

                                                
                                                
-- stdout --
	* [missing-upgrade-960000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-960000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-960000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:323: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-02-23 14:35:07.924465 -0800 PST m=+2196.005002795
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-960000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-960000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e891937b6f17b8a9f0ff95d4186b118940a0908913c5c3fe45aabd1019cbd2a3",
	        "Created": "2023-02-23T22:34:52.738184751Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 155887,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:34:52.959670354Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/e891937b6f17b8a9f0ff95d4186b118940a0908913c5c3fe45aabd1019cbd2a3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e891937b6f17b8a9f0ff95d4186b118940a0908913c5c3fe45aabd1019cbd2a3/hostname",
	        "HostsPath": "/var/lib/docker/containers/e891937b6f17b8a9f0ff95d4186b118940a0908913c5c3fe45aabd1019cbd2a3/hosts",
	        "LogPath": "/var/lib/docker/containers/e891937b6f17b8a9f0ff95d4186b118940a0908913c5c3fe45aabd1019cbd2a3/e891937b6f17b8a9f0ff95d4186b118940a0908913c5c3fe45aabd1019cbd2a3-json.log",
	        "Name": "/missing-upgrade-960000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-960000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/889aedd71bb03fc45291db35f4e5b286c35eb5a4cf8205e5c7908bd392ae5d6e-init/diff:/var/lib/docker/overlay2/fe163f75d15c1600a86cbda939dc828f32e948a7ba80a10851cf76d1ccad977b/diff:/var/lib/docker/overlay2/bc89f9b633d0df79d775606c72e2b551d9362eee940b481aebffb8273e8a489d/diff:/var/lib/docker/overlay2/57e0af661a5f2fa2ae19898701b2cc8814c0dcf8d09b930829d352647ee3e589/diff:/var/lib/docker/overlay2/9f5009bd56682aeeddfbc59aecc89f04f13bc1b37dbf1ca06fc540d5cba93991/diff:/var/lib/docker/overlay2/7dc8d3304a4fe44e8d19be3bdbe4f47caf8385a4d22e2a9bbd2774da894e7bdd/diff:/var/lib/docker/overlay2/029fd9baa1c4cdcd0a43240a6eadb0e7f4d1421b1d2434fdd87df54f675baf11/diff:/var/lib/docker/overlay2/5829b2c789886a5cd39008c129b46e73f7822a1473abea669637b6bd0efe68e3/diff:/var/lib/docker/overlay2/215a98184a6fa615cf1cc848d59bac9d2ac965359281f0a133d575bc7517d495/diff:/var/lib/docker/overlay2/3bde475daae19a9a6f1d3f4c372cd4b0c6d5f52432bcf09ad14ed428b62a6b95/diff:/var/lib/docker/overlay2/34e6b4
6412104b179a789c1ace1c521f89aaeb25c46bcf84e241b0808ddb923a/diff:/var/lib/docker/overlay2/7c536fcf86d065c285b0ec5a1f285af313f5a15ff977306e6e2cbba95fdc64f7/diff:/var/lib/docker/overlay2/a5bc9269cf95ad2bf297949ae6146b5e75680c1c17b5920b9de16fcec458310d/diff:/var/lib/docker/overlay2/4c4cd194559d13662a7e8531a58939ec2f267d8bff017a39654d075f9b2b880b/diff:/var/lib/docker/overlay2/6cdc854178c07262b5702fcbd3831af9eb85d9c03b0fbe1de673fec75d0969f1/diff:/var/lib/docker/overlay2/3c937187f815d9743ba04c27b3add3e4446625932e5f3996a7effea0c83d1587/diff:/var/lib/docker/overlay2/6ac7243a6fc041dd3d75ed91cc6bf9f0ec556757a168d97780aa6a00b7b7f23e/diff:/var/lib/docker/overlay2/e914889bbbfe1609ea740a0363c6e6ac21844aa775b4d8174565db3d75ace01f/diff:/var/lib/docker/overlay2/c4b8bd019ef4127f6d6dfdd2975426195c30e4cd6616ddd351168fcdaf91ed74/diff:/var/lib/docker/overlay2/9701172dcdfa6982ce257b348de5f29a654e8bf321d54fed773c718337e960d4/diff:/var/lib/docker/overlay2/4fe3e1ad7e3cfc88c7e5be7172e081e5bcc0b5cfb616e6d57c4917393e9ab41d/diff:/var/lib/d
ocker/overlay2/f7cadf028e1496dd2b43fa3a6f5f141f5eec9db02540deff017e1be412897e4b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/889aedd71bb03fc45291db35f4e5b286c35eb5a4cf8205e5c7908bd392ae5d6e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/889aedd71bb03fc45291db35f4e5b286c35eb5a4cf8205e5c7908bd392ae5d6e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/889aedd71bb03fc45291db35f4e5b286c35eb5a4cf8205e5c7908bd392ae5d6e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-960000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-960000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-960000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-960000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-960000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "646ed4ce30072901312b42ebb0952a9c9f5fc1369caa8b0a423cf1d233039382",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59698"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59699"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59700"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/646ed4ce3007",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "48577f7988b00f73c6be02564433493afe7ec72630e8b084c9e732781f2b5d06",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "722434272e12cbeb564120b7c0da7377c2ccab5f40fcf563a974355b96a35fdf",
	                    "EndpointID": "48577f7988b00f73c6be02564433493afe7ec72630e8b084c9e732781f2b5d06",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
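The full docker inspect dump above is what the post-mortem helper captures; when reproducing by hand, narrower Go-template queries return the same state and port-mapping details without the noise (hypothetical examples using the same container name):

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' missing-upgrade-960000
	docker inspect -f '{{json .NetworkSettings.Ports}}' missing-upgrade-960000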
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-960000 -n missing-upgrade-960000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-960000 -n missing-upgrade-960000: exit status 6 (374.600892ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 14:35:08.345182   24804 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-960000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-960000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
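The status output warns that kubectl is pointing at a stale context, and the stderr confirms the profile is missing from the kubeconfig. Outside the harness, the fix the warning itself suggests would look like this (hypothetical, run against the same profile before it is cleaned up below):

	out/minikube-darwin-amd64 update-context -p missing-upgrade-960000
	kubectl config get-contexts    # check whether a missing-upgrade-960000 context is now present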
helpers_test.go:175: Cleaning up "missing-upgrade-960000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-960000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-960000: (2.277047334s)
--- FAIL: TestMissingContainerUpgrade (75.35s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (56.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.2311597475.exe start -p stopped-upgrade-757000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.2311597475.exe start -p stopped-upgrade-757000 --memory=2200 --vm-driver=docker : exit status 70 (44.06696019s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-757000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig621415634
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:35:36.525482876 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-757000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:35:55.715686979 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-757000", then "minikube start -p stopped-upgrade-757000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 176.13 KiB ... 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:35:55.715686979 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.2311597475.exe start -p stopped-upgrade-757000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.2311597475.exe start -p stopped-upgrade-757000 --memory=2200 --vm-driver=docker : exit status 70 (4.480749461s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-757000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig1935729899
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-757000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.2311597475.exe start -p stopped-upgrade-757000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.2311597475.exe start -p stopped-upgrade-757000 --memory=2200 --vm-driver=docker : exit status 70 (4.285693809s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-757000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig494706945
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-757000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:197: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (56.10s)
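
The unit-file diff repeated above shows what the legacy v1.9.0 provisioner writes: it first clears the inherited ExecStart= and then sets its own dockerd command line, which is the usual way to avoid systemd's "more than one ExecStart=" rejection; the failure here is docker.service then refusing to start on the old boot image. A rough debugging sketch, assuming the stopped-upgrade-757000 container from this run is still present and has systemd tools available:

	# Ask systemd inside the minikube container why docker.service failed
	# (the log above only shows the generic "control process exited" message).
	docker exec stopped-upgrade-757000 systemctl status docker.service --no-pager
	docker exec stopped-upgrade-757000 journalctl -u docker.service --no-pager -n 50

	# Confirm the rewritten unit contains the empty ExecStart= followed by the
	# full dockerd command, exactly as in the diff above.
	docker exec stopped-upgrade-757000 grep -n "^ExecStart" /lib/systemd/system/docker.service
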

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (250.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-919000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-919000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m10.04334901s)

                                                
                                                
-- stdout --
	* [old-k8s-version-919000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-919000 in cluster old-k8s-version-919000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 14:52:39.431113   32430 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:52:39.431276   32430 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:52:39.431281   32430 out.go:309] Setting ErrFile to fd 2...
	I0223 14:52:39.431285   32430 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:52:39.431393   32430 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:52:39.432719   32430 out.go:303] Setting JSON to false
	I0223 14:52:39.451605   32430 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8533,"bootTime":1677184226,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 14:52:39.451734   32430 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 14:52:39.473576   32430 out.go:177] * [old-k8s-version-919000] minikube v1.29.0 on Darwin 13.2
	I0223 14:52:39.515434   32430 notify.go:220] Checking for updates...
	I0223 14:52:39.537622   32430 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 14:52:39.560631   32430 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:52:39.581424   32430 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 14:52:39.602621   32430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 14:52:39.623530   32430 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	I0223 14:52:39.644386   32430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 14:52:39.665807   32430 config.go:182] Loaded profile config "kubenet-452000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:52:39.665859   32430 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 14:52:39.727929   32430 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 14:52:39.728062   32430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:52:39.902403   32430 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 22:52:39.804630495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:52:39.944636   32430 out.go:177] * Using the docker driver based on user configuration
	I0223 14:52:39.965897   32430 start.go:296] selected driver: docker
	I0223 14:52:39.965921   32430 start.go:857] validating driver "docker" against <nil>
	I0223 14:52:39.965946   32430 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 14:52:39.969729   32430 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:52:40.110016   32430 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 22:52:40.018841565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:52:40.110146   32430 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 14:52:40.110325   32430 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 14:52:40.132195   32430 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 14:52:40.153716   32430 cni.go:84] Creating CNI manager for ""
	I0223 14:52:40.153755   32430 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 14:52:40.153768   32430 start_flags.go:319] config:
	{Name:old-k8s-version-919000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-919000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:52:40.196705   32430 out.go:177] * Starting control plane node old-k8s-version-919000 in cluster old-k8s-version-919000
	I0223 14:52:40.217673   32430 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 14:52:40.227919   32430 out.go:177] * Pulling base image ...
	I0223 14:52:40.301849   32430 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 14:52:40.301868   32430 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 14:52:40.301956   32430 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 14:52:40.301984   32430 cache.go:57] Caching tarball of preloaded images
	I0223 14:52:40.302227   32430 preload.go:174] Found /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 14:52:40.302253   32430 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 14:52:40.303214   32430 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/config.json ...
	I0223 14:52:40.303349   32430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/config.json: {Name:mk1198d9bc72d8aae8620600b1c49fe68054cdad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:52:40.358701   32430 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 14:52:40.358719   32430 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 14:52:40.358740   32430 cache.go:193] Successfully downloaded all kic artifacts
	I0223 14:52:40.358797   32430 start.go:364] acquiring machines lock for old-k8s-version-919000: {Name:mk1103874b67893bb0fc52742240655036abe57b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 14:52:40.358949   32430 start.go:368] acquired machines lock for "old-k8s-version-919000" in 139.581µs
	I0223 14:52:40.358985   32430 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-919000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-919000 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 14:52:40.359090   32430 start.go:125] createHost starting for "" (driver="docker")
	I0223 14:52:40.380709   32430 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 14:52:40.381150   32430 start.go:159] libmachine.API.Create for "old-k8s-version-919000" (driver="docker")
	I0223 14:52:40.381187   32430 client.go:168] LocalClient.Create starting
	I0223 14:52:40.381397   32430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem
	I0223 14:52:40.381492   32430 main.go:141] libmachine: Decoding PEM data...
	I0223 14:52:40.381536   32430 main.go:141] libmachine: Parsing certificate...
	I0223 14:52:40.381667   32430 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem
	I0223 14:52:40.381731   32430 main.go:141] libmachine: Decoding PEM data...
	I0223 14:52:40.381747   32430 main.go:141] libmachine: Parsing certificate...
	I0223 14:52:40.382565   32430 cli_runner.go:164] Run: docker network inspect old-k8s-version-919000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 14:52:40.438524   32430 cli_runner.go:211] docker network inspect old-k8s-version-919000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 14:52:40.438627   32430 network_create.go:281] running [docker network inspect old-k8s-version-919000] to gather additional debugging logs...
	I0223 14:52:40.438649   32430 cli_runner.go:164] Run: docker network inspect old-k8s-version-919000
	W0223 14:52:40.492791   32430 cli_runner.go:211] docker network inspect old-k8s-version-919000 returned with exit code 1
	I0223 14:52:40.492820   32430 network_create.go:284] error running [docker network inspect old-k8s-version-919000]: docker network inspect old-k8s-version-919000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-919000
	I0223 14:52:40.492836   32430 network_create.go:286] output of [docker network inspect old-k8s-version-919000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-919000
	
	** /stderr **
	I0223 14:52:40.492938   32430 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 14:52:40.549524   32430 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 14:52:40.549865   32430 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001401b60}
	I0223 14:52:40.549879   32430 network_create.go:123] attempt to create docker network old-k8s-version-919000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 14:52:40.549942   32430 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-919000 old-k8s-version-919000
	W0223 14:52:40.605268   32430 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-919000 old-k8s-version-919000 returned with exit code 1
	W0223 14:52:40.605302   32430 network_create.go:148] failed to create docker network old-k8s-version-919000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-919000 old-k8s-version-919000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 14:52:40.605316   32430 network_create.go:115] failed to create docker network old-k8s-version-919000 192.168.58.0/24, will retry: subnet is taken
	I0223 14:52:40.606686   32430 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 14:52:40.607002   32430 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ecb3d0}
	I0223 14:52:40.607015   32430 network_create.go:123] attempt to create docker network old-k8s-version-919000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 14:52:40.607092   32430 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-919000 old-k8s-version-919000
	I0223 14:52:40.695304   32430 network_create.go:107] docker network old-k8s-version-919000 192.168.67.0/24 created
	I0223 14:52:40.695338   32430 kic.go:117] calculated static IP "192.168.67.2" for the "old-k8s-version-919000" container
	I0223 14:52:40.695460   32430 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 14:52:40.752644   32430 cli_runner.go:164] Run: docker volume create old-k8s-version-919000 --label name.minikube.sigs.k8s.io=old-k8s-version-919000 --label created_by.minikube.sigs.k8s.io=true
	I0223 14:52:40.808060   32430 oci.go:103] Successfully created a docker volume old-k8s-version-919000
	I0223 14:52:40.808195   32430 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-919000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-919000 --entrypoint /usr/bin/test -v old-k8s-version-919000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 14:52:41.253209   32430 oci.go:107] Successfully prepared a docker volume old-k8s-version-919000
	I0223 14:52:41.253243   32430 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 14:52:41.253260   32430 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 14:52:41.253358   32430 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-919000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 14:52:46.908882   32430 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-919000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (5.655294557s)
	I0223 14:52:46.908905   32430 kic.go:199] duration metric: took 5.655481 seconds to extract preloaded images to volume
	I0223 14:52:46.909018   32430 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 14:52:47.054050   32430 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-919000 --name old-k8s-version-919000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-919000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-919000 --network old-k8s-version-919000 --ip 192.168.67.2 --volume old-k8s-version-919000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 14:52:47.402867   32430 cli_runner.go:164] Run: docker container inspect old-k8s-version-919000 --format={{.State.Running}}
	I0223 14:52:47.462729   32430 cli_runner.go:164] Run: docker container inspect old-k8s-version-919000 --format={{.State.Status}}
	I0223 14:52:47.522800   32430 cli_runner.go:164] Run: docker exec old-k8s-version-919000 stat /var/lib/dpkg/alternatives/iptables
	I0223 14:52:47.629111   32430 oci.go:144] the created container "old-k8s-version-919000" has a running status.
	I0223 14:52:47.629151   32430 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/old-k8s-version-919000/id_rsa...
	I0223 14:52:47.739665   32430 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/old-k8s-version-919000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 14:52:47.900573   32430 cli_runner.go:164] Run: docker container inspect old-k8s-version-919000 --format={{.State.Status}}
	I0223 14:52:47.970228   32430 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 14:52:47.970255   32430 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-919000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 14:52:48.082149   32430 cli_runner.go:164] Run: docker container inspect old-k8s-version-919000 --format={{.State.Status}}
	I0223 14:52:48.145203   32430 machine.go:88] provisioning docker machine ...
	I0223 14:52:48.145257   32430 ubuntu.go:169] provisioning hostname "old-k8s-version-919000"
	I0223 14:52:48.145379   32430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:52:48.212013   32430 main.go:141] libmachine: Using SSH client type: native
	I0223 14:52:48.212460   32430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62097 <nil> <nil>}
	I0223 14:52:48.212480   32430 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-919000 && echo "old-k8s-version-919000" | sudo tee /etc/hostname
	I0223 14:52:48.370569   32430 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-919000
	
	I0223 14:52:48.370686   32430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:52:48.436912   32430 main.go:141] libmachine: Using SSH client type: native
	I0223 14:52:48.437282   32430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62097 <nil> <nil>}
	I0223 14:52:48.437295   32430 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-919000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-919000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-919000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 14:52:48.573405   32430 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 14:52:48.573440   32430 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
	I0223 14:52:48.573470   32430 ubuntu.go:177] setting up certificates
	I0223 14:52:48.573476   32430 provision.go:83] configureAuth start
	I0223 14:52:48.573551   32430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-919000
	I0223 14:52:48.637083   32430 provision.go:138] copyHostCerts
	I0223 14:52:48.637187   32430 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
	I0223 14:52:48.637197   32430 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:52:48.637309   32430 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
	I0223 14:52:48.637500   32430 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
	I0223 14:52:48.637507   32430 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:52:48.637580   32430 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
	I0223 14:52:48.637747   32430 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
	I0223 14:52:48.637753   32430 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:52:48.637822   32430 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
	I0223 14:52:48.637946   32430 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-919000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-919000]
	I0223 14:52:48.732495   32430 provision.go:172] copyRemoteCerts
	I0223 14:52:48.732561   32430 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 14:52:48.732617   32430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:52:48.800182   32430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62097 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/old-k8s-version-919000/id_rsa Username:docker}
	I0223 14:52:48.897431   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 14:52:48.923817   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0223 14:52:48.947292   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 14:52:48.969499   32430 provision.go:86] duration metric: configureAuth took 395.999881ms
	I0223 14:52:48.969514   32430 ubuntu.go:193] setting minikube options for container-runtime
	I0223 14:52:48.969683   32430 config.go:182] Loaded profile config "old-k8s-version-919000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 14:52:48.969744   32430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:52:49.035347   32430 main.go:141] libmachine: Using SSH client type: native
	I0223 14:52:49.035787   32430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62097 <nil> <nil>}
	I0223 14:52:49.035816   32430 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 14:52:49.168398   32430 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 14:52:49.168411   32430 ubuntu.go:71] root file system type: overlay
	I0223 14:52:49.168494   32430 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 14:52:49.168582   32430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:52:49.235073   32430 main.go:141] libmachine: Using SSH client type: native
	I0223 14:52:49.235438   32430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62097 <nil> <nil>}
	I0223 14:52:49.235490   32430 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 14:52:49.378324   32430 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 14:52:49.378411   32430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:52:49.460875   32430 main.go:141] libmachine: Using SSH client type: native
	I0223 14:52:49.461242   32430 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62097 <nil> <nil>}
	I0223 14:52:49.461258   32430 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 14:52:50.231373   32430 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 22:52:49.374491755 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
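
The drop-in applied above first clears the ExecStart command inherited from the base docker.service with an empty "ExecStart=" directive and then supplies the TLS-enabled dockerd command line, exactly as the comments inside the generated unit explain; the follow-up command only swaps the file in and restarts Docker when diff reports a change. As an illustration only (a minimal Go sketch in the same style, not minikube's actual template or provisioning code), such an override could be rendered like this, reusing the /etc/docker paths the certificates were copied to a few lines earlier:

package main

import (
	"log"
	"os"
	"text/template"
)

// tlsPaths carries the certificate locations substituted into the unit.
type tlsPaths struct {
	CACert, ServerCert, ServerKey string
}

// unitTmpl mirrors the pattern above: an empty ExecStart= clears the command
// inherited from the base docker.service before the replacement is set.
const unitTmpl = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}}
`

func main() {
	t := template.Must(template.New("docker-override").Parse(unitTmpl))
	// Paths match where ca.pem, server.pem and server-key.pem were scp'd
	// earlier in this log.
	if err := t.Execute(os.Stdout, tlsPaths{
		CACert:     "/etc/docker/ca.pem",
		ServerCert: "/etc/docker/server.pem",
		ServerKey:  "/etc/docker/server-key.pem",
	}); err != nil {
		log.Fatal(err)
	}
}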
	
	I0223 14:52:50.231397   32430 machine.go:91] provisioned docker machine in 2.086103043s
	I0223 14:52:50.231404   32430 client.go:171] LocalClient.Create took 9.84992635s
	I0223 14:52:50.231422   32430 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-919000" took 9.849988324s
	I0223 14:52:50.231434   32430 start.go:300] post-start starting for "old-k8s-version-919000" (driver="docker")
	I0223 14:52:50.231440   32430 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 14:52:50.231536   32430 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 14:52:50.231603   32430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:52:50.296889   32430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62097 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/old-k8s-version-919000/id_rsa Username:docker}
	I0223 14:52:50.392588   32430 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 14:52:50.396770   32430 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 14:52:50.396788   32430 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 14:52:50.396800   32430 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 14:52:50.396805   32430 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 14:52:50.396816   32430 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
	I0223 14:52:50.396915   32430 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
	I0223 14:52:50.397091   32430 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
	I0223 14:52:50.397315   32430 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 14:52:50.405426   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:52:50.426644   32430 start.go:303] post-start completed in 195.194102ms
	I0223 14:52:50.427202   32430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-919000
	I0223 14:52:50.488702   32430 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/config.json ...
	I0223 14:52:50.489153   32430 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 14:52:50.489208   32430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:52:50.553273   32430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62097 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/old-k8s-version-919000/id_rsa Username:docker}
	I0223 14:52:50.643080   32430 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 14:52:50.647735   32430 start.go:128] duration metric: createHost completed in 10.288336174s
	I0223 14:52:50.647757   32430 start.go:83] releasing machines lock for "old-k8s-version-919000", held for 10.288503047s
	I0223 14:52:50.647851   32430 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-919000
	I0223 14:52:50.707122   32430 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0223 14:52:50.707123   32430 ssh_runner.go:195] Run: cat /version.json
	I0223 14:52:50.707221   32430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:52:50.707244   32430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:52:50.771087   32430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62097 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/old-k8s-version-919000/id_rsa Username:docker}
	I0223 14:52:50.772513   32430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62097 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/old-k8s-version-919000/id_rsa Username:docker}
	I0223 14:52:51.066375   32430 ssh_runner.go:195] Run: systemctl --version
	I0223 14:52:51.072080   32430 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 14:52:51.077490   32430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 14:52:51.099222   32430 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 14:52:51.099293   32430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 14:52:51.113674   32430 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 14:52:51.122186   32430 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 14:52:51.122206   32430 start.go:485] detecting cgroup driver to use...
	I0223 14:52:51.122218   32430 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:52:51.122300   32430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:52:51.135930   32430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0223 14:52:51.144433   32430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 14:52:51.152868   32430 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 14:52:51.152927   32430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 14:52:51.161368   32430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:52:51.170281   32430 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 14:52:51.178618   32430 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:52:51.186889   32430 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 14:52:51.194578   32430 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 14:52:51.203017   32430 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 14:52:51.210291   32430 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 14:52:51.217445   32430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:52:51.288947   32430 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 14:52:51.359519   32430 start.go:485] detecting cgroup driver to use...
	I0223 14:52:51.359539   32430 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:52:51.359598   32430 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 14:52:51.370179   32430 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 14:52:51.370249   32430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 14:52:51.380482   32430 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:52:51.394630   32430 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 14:52:51.461609   32430 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 14:52:51.553760   32430 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 14:52:51.553778   32430 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 14:52:51.568020   32430 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:52:51.651803   32430 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 14:52:51.877800   32430 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:52:51.903791   32430 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:52:51.956159   32430 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0223 14:52:51.956268   32430 cli_runner.go:164] Run: docker exec -t old-k8s-version-919000 dig +short host.docker.internal
	I0223 14:52:52.068331   32430 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 14:52:52.068439   32430 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 14:52:52.072634   32430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:52:52.082701   32430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:52:52.140954   32430 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 14:52:52.141045   32430 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:52:52.161804   32430 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 14:52:52.161821   32430 docker.go:560] Images already preloaded, skipping extraction
	I0223 14:52:52.161898   32430 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:52:52.182412   32430 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 14:52:52.182429   32430 cache_images.go:84] Images are preloaded, skipping loading
	I0223 14:52:52.182524   32430 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 14:52:52.207940   32430 cni.go:84] Creating CNI manager for ""
	I0223 14:52:52.207961   32430 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 14:52:52.207993   32430 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 14:52:52.208015   32430 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-919000 NodeName:old-k8s-version-919000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 14:52:52.208146   32430 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-919000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-919000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
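
The KubeletConfiguration document generated above pins cgroupDriver to cgroupfs so the kubelet matches the Docker daemon configured earlier in this run, and sets failSwapOn to false because the node runs with swap enabled. A minimal sketch of reading those fields back out of such a document, assuming the gopkg.in/yaml.v3 module and a trimmed single-document excerpt (the real kubeadm.yaml written below is multi-document); this is illustrative only, not minikube or kubeadm code:

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// kubeletConfig models only the fields this sketch reads; the full
// KubeletConfiguration type lives in the Kubernetes source tree.
type kubeletConfig struct {
	Kind         string `yaml:"kind"`
	CgroupDriver string `yaml:"cgroupDriver"`
	FailSwapOn   bool   `yaml:"failSwapOn"`
}

// doc is a trimmed excerpt of the KubeletConfiguration section above.
const doc = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
failSwapOn: false
`

func main() {
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
		log.Fatal(err)
	}
	// Prints: KubeletConfiguration cgroupfs false
	fmt.Println(kc.Kind, kc.CgroupDriver, kc.FailSwapOn)
}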
	
	I0223 14:52:52.208234   32430 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-919000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-919000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 14:52:52.208301   32430 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0223 14:52:52.216117   32430 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 14:52:52.216182   32430 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 14:52:52.224062   32430 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0223 14:52:52.237096   32430 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 14:52:52.249964   32430 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0223 14:52:52.262695   32430 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0223 14:52:52.266486   32430 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:52:52.276954   32430 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000 for IP: 192.168.67.2
	I0223 14:52:52.276972   32430 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:52:52.277162   32430 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
	I0223 14:52:52.277226   32430 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
	I0223 14:52:52.277273   32430 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/client.key
	I0223 14:52:52.277285   32430 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/client.crt with IP's: []
	I0223 14:52:52.516820   32430 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/client.crt ...
	I0223 14:52:52.516835   32430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/client.crt: {Name:mk4729da49f295ebed842cbb7324ce0cd3983ac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:52:52.517118   32430 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/client.key ...
	I0223 14:52:52.517125   32430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/client.key: {Name:mkc2fcc739f19cfceab502db0ed3656e8e828956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:52:52.517330   32430 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.key.c7fa3a9e
	I0223 14:52:52.517345   32430 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 14:52:52.772666   32430 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.crt.c7fa3a9e ...
	I0223 14:52:52.772684   32430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.crt.c7fa3a9e: {Name:mk52b8d2c6a741f4be45589c804e894bac175559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:52:52.772977   32430 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.key.c7fa3a9e ...
	I0223 14:52:52.772986   32430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.key.c7fa3a9e: {Name:mk606c4d2d67cc68564379563d31a8083b34c554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:52:52.773161   32430 certs.go:333] copying /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.crt
	I0223 14:52:52.773317   32430 certs.go:337] copying /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.key
	I0223 14:52:52.773460   32430 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/proxy-client.key
	I0223 14:52:52.773474   32430 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/proxy-client.crt with IP's: []
	I0223 14:52:52.891922   32430 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/proxy-client.crt ...
	I0223 14:52:52.891933   32430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/proxy-client.crt: {Name:mkca411959b8fc2747c03aec93cb7e507d2c926c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:52:52.892178   32430 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/proxy-client.key ...
	I0223 14:52:52.892190   32430 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/proxy-client.key: {Name:mk7a4c783c64f1c5ccbb59f7ca256b54763f19fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:52:52.892547   32430 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
	W0223 14:52:52.892593   32430 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
	I0223 14:52:52.892628   32430 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 14:52:52.892671   32430 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
	I0223 14:52:52.892705   32430 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
	I0223 14:52:52.892737   32430 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
	I0223 14:52:52.892813   32430 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:52:52.893274   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 14:52:52.911517   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 14:52:52.928581   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 14:52:52.945802   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 14:52:52.963694   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 14:52:52.981122   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 14:52:52.998476   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 14:52:53.015672   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 14:52:53.033013   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
	I0223 14:52:53.050287   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 14:52:53.067607   32430 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
	I0223 14:52:53.084663   32430 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 14:52:53.097869   32430 ssh_runner.go:195] Run: openssl version
	I0223 14:52:53.103683   32430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
	I0223 14:52:53.112039   32430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
	I0223 14:52:53.116059   32430 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:52:53.116112   32430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
	I0223 14:52:53.121627   32430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 14:52:53.129745   32430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 14:52:53.137859   32430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:52:53.142106   32430 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:52:53.142153   32430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:52:53.147652   32430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 14:52:53.155750   32430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
	I0223 14:52:53.163809   32430 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
	I0223 14:52:53.167952   32430 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:52:53.167998   32430 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
	I0223 14:52:53.173810   32430 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
	I0223 14:52:53.181719   32430 kubeadm.go:401] StartCluster: {Name:old-k8s-version-919000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-919000 Namespace:default APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:52:53.181838   32430 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 14:52:53.201678   32430 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 14:52:53.209702   32430 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 14:52:53.217222   32430 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 14:52:53.217282   32430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 14:52:53.225035   32430 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 14:52:53.225069   32430 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 14:52:53.272936   32430 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 14:52:53.272980   32430 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 14:52:53.441002   32430 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 14:52:53.441091   32430 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 14:52:53.441165   32430 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 14:52:53.594386   32430 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 14:52:53.595084   32430 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 14:52:53.601229   32430 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 14:52:53.669660   32430 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 14:52:53.711967   32430 out.go:204]   - Generating certificates and keys ...
	I0223 14:52:53.712101   32430 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 14:52:53.712188   32430 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 14:52:53.753918   32430 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 14:52:53.840933   32430 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 14:52:53.897455   32430 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 14:52:54.059363   32430 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 14:52:54.131936   32430 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 14:52:54.132053   32430 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-919000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0223 14:52:54.283583   32430 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 14:52:54.283701   32430 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-919000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0223 14:52:54.370331   32430 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 14:52:54.480788   32430 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 14:52:54.515290   32430 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 14:52:54.515364   32430 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 14:52:54.578658   32430 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 14:52:54.899273   32430 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 14:52:54.943311   32430 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 14:52:55.023741   32430 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 14:52:55.024217   32430 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 14:52:55.045930   32430 out.go:204]   - Booting up control plane ...
	I0223 14:52:55.046121   32430 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 14:52:55.046241   32430 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 14:52:55.046365   32430 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 14:52:55.046470   32430 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 14:52:55.046722   32430 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 14:53:35.033928   32430 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 14:53:35.034459   32430 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:53:35.034676   32430 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:53:40.035363   32430 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:53:40.035578   32430 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:53:50.036459   32430 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:53:50.036678   32430 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:54:10.037838   32430 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:54:10.038057   32430 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:54:50.040348   32430 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:54:50.040578   32430 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:54:50.040590   32430 kubeadm.go:322] 
	I0223 14:54:50.040629   32430 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 14:54:50.040681   32430 kubeadm.go:322] 	timed out waiting for the condition
	I0223 14:54:50.040692   32430 kubeadm.go:322] 
	I0223 14:54:50.040730   32430 kubeadm.go:322] This error is likely caused by:
	I0223 14:54:50.040763   32430 kubeadm.go:322] 	- The kubelet is not running
	I0223 14:54:50.040917   32430 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 14:54:50.040932   32430 kubeadm.go:322] 
	I0223 14:54:50.041095   32430 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 14:54:50.041139   32430 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 14:54:50.041172   32430 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 14:54:50.041179   32430 kubeadm.go:322] 
	I0223 14:54:50.041311   32430 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 14:54:50.041417   32430 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 14:54:50.041515   32430 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 14:54:50.041587   32430 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 14:54:50.041692   32430 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 14:54:50.041733   32430 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 14:54:50.044081   32430 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 14:54:50.044149   32430 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 14:54:50.044255   32430 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 14:54:50.044341   32430 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 14:54:50.044416   32430 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 14:54:50.044479   32430 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
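
The repeated [kubelet-check] failures above are plain HTTP probes of the kubelet's local health endpoint; "connection refused" on 127.0.0.1:10248 means nothing is listening there, i.e. the kubelet process never came up during the 4m0s wait. A minimal Go sketch of the equivalent probe (an illustration of the check described in the log, not kubeadm's own code):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The kubelet healthz endpoint that kubeadm's [kubelet-check] polls.
	const url = "http://localhost:10248/healthz"
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// On this node the probe fails with "connection refused" because
		// the kubelet is not running, matching the log lines above.
		fmt.Println("kubelet healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
}

Running this probe on the node (or the curl command quoted in the message) after 'systemctl status kubelet' and 'journalctl -xeu kubelet', as the output itself suggests, is the quickest way to confirm whether the kubelet ever started.
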
	W0223 14:54:50.044632   32430 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-919000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-919000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-919000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-919000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0223 14:54:50.044663   32430 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 14:54:50.476780   32430 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:54:50.486504   32430 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 14:54:50.486570   32430 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 14:54:50.494012   32430 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 14:54:50.494034   32430 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 14:54:50.540999   32430 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 14:54:50.541040   32430 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 14:54:50.703953   32430 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 14:54:50.704061   32430 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 14:54:50.704141   32430 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 14:54:50.856189   32430 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 14:54:50.856918   32430 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 14:54:50.863467   32430 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 14:54:50.935375   32430 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 14:54:50.977798   32430 out.go:204]   - Generating certificates and keys ...
	I0223 14:54:50.977893   32430 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 14:54:50.977975   32430 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 14:54:50.978070   32430 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 14:54:50.978122   32430 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 14:54:50.978178   32430 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 14:54:50.978239   32430 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 14:54:50.978305   32430 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 14:54:50.978364   32430 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 14:54:50.978426   32430 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 14:54:50.978510   32430 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 14:54:50.978555   32430 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 14:54:50.978610   32430 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 14:54:51.354698   32430 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 14:54:51.549943   32430 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 14:54:51.770655   32430 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 14:54:51.849426   32430 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 14:54:51.849997   32430 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 14:54:51.871383   32430 out.go:204]   - Booting up control plane ...
	I0223 14:54:51.871505   32430 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 14:54:51.871583   32430 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 14:54:51.871646   32430 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 14:54:51.871734   32430 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 14:54:51.871919   32430 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 14:55:31.859990   32430 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 14:55:31.861077   32430 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:55:31.861281   32430 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:55:36.862645   32430 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:55:36.862867   32430 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:55:46.863876   32430 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:55:46.864112   32430 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:56:06.866414   32430 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:56:06.866651   32430 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:56:46.868566   32430 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 14:56:46.868738   32430 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 14:56:46.868749   32430 kubeadm.go:322] 
	I0223 14:56:46.868782   32430 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 14:56:46.868830   32430 kubeadm.go:322] 	timed out waiting for the condition
	I0223 14:56:46.868851   32430 kubeadm.go:322] 
	I0223 14:56:46.868921   32430 kubeadm.go:322] This error is likely caused by:
	I0223 14:56:46.868990   32430 kubeadm.go:322] 	- The kubelet is not running
	I0223 14:56:46.869153   32430 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 14:56:46.869172   32430 kubeadm.go:322] 
	I0223 14:56:46.869272   32430 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 14:56:46.869299   32430 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 14:56:46.869324   32430 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 14:56:46.869330   32430 kubeadm.go:322] 
	I0223 14:56:46.869442   32430 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 14:56:46.869513   32430 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 14:56:46.869588   32430 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 14:56:46.869642   32430 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 14:56:46.869692   32430 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 14:56:46.869714   32430 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 14:56:46.872137   32430 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 14:56:46.872208   32430 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 14:56:46.872317   32430 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 14:56:46.872407   32430 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 14:56:46.872470   32430 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 14:56:46.872530   32430 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 14:56:46.872559   32430 kubeadm.go:403] StartCluster complete in 3m53.684090344s
	I0223 14:56:46.872650   32430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 14:56:46.893196   32430 logs.go:277] 0 containers: []
	W0223 14:56:46.893211   32430 logs.go:279] No container was found matching "kube-apiserver"
	I0223 14:56:46.893286   32430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 14:56:46.912551   32430 logs.go:277] 0 containers: []
	W0223 14:56:46.912565   32430 logs.go:279] No container was found matching "etcd"
	I0223 14:56:46.912635   32430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 14:56:46.933113   32430 logs.go:277] 0 containers: []
	W0223 14:56:46.933126   32430 logs.go:279] No container was found matching "coredns"
	I0223 14:56:46.933197   32430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 14:56:46.951895   32430 logs.go:277] 0 containers: []
	W0223 14:56:46.951911   32430 logs.go:279] No container was found matching "kube-scheduler"
	I0223 14:56:46.951988   32430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 14:56:46.973476   32430 logs.go:277] 0 containers: []
	W0223 14:56:46.973490   32430 logs.go:279] No container was found matching "kube-proxy"
	I0223 14:56:46.973560   32430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 14:56:47.000838   32430 logs.go:277] 0 containers: []
	W0223 14:56:47.000858   32430 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 14:56:47.000951   32430 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 14:56:47.020520   32430 logs.go:277] 0 containers: []
	W0223 14:56:47.020535   32430 logs.go:279] No container was found matching "kindnet"
	I0223 14:56:47.020543   32430 logs.go:123] Gathering logs for kubelet ...
	I0223 14:56:47.020551   32430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 14:56:47.058017   32430 logs.go:123] Gathering logs for dmesg ...
	I0223 14:56:47.058040   32430 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 14:56:47.070651   32430 logs.go:123] Gathering logs for describe nodes ...
	I0223 14:56:47.070665   32430 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 14:56:47.123431   32430 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 14:56:47.123442   32430 logs.go:123] Gathering logs for Docker ...
	I0223 14:56:47.123449   32430 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 14:56:47.147608   32430 logs.go:123] Gathering logs for container status ...
	I0223 14:56:47.147622   32430 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 14:56:49.194830   32430 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047136646s)
	W0223 14:56:49.194992   32430 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 14:56:49.195009   32430 out.go:239] * 
	* 
	W0223 14:56:49.195175   32430 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 14:56:49.195190   32430 out.go:239] * 
	* 
	W0223 14:56:49.195811   32430 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 14:56:49.258520   32430 out.go:177] 
	W0223 14:56:49.300332   32430 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 14:56:49.300408   32430 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 14:56:49.300452   32430 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 14:56:49.358626   32430 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-919000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
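The captured stderr above ends with minikube's own remediation hint (pass --extra-config=kubelet.cgroup-driver=systemd and check 'journalctl -xeu kubelet'). A minimal sketch of a retry that applies that hint to the failing command above, shown only as an illustration of the suggested flag and not verified against this run:

	out/minikube-darwin-amd64 start -p old-k8s-version-919000 --memory=2200 --driver=docker \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd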
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-919000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-919000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f",
	        "Created": "2023-02-23T22:52:47.108009889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:52:47.393667172Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hosts",
	        "LogPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f-json.log",
	        "Name": "/old-k8s-version-919000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-919000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-919000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-919000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-919000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-919000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a3aa73756f04f765ad6387b630175a269d03136baccdaf5a3a9fdc4f1198b973",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62097"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62098"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62096"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a3aa73756f04",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-919000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5b30451d4570",
	                        "old-k8s-version-919000"
	                    ],
	                    "NetworkID": "c7154bbdfe1ae896999b2fd2c462dec29ff61281e64aa32aac9e788f781af78c",
	                    "EndpointID": "ebc723f5d2c72d46fce5dc0c853fb943a03ba08eeae02776867e996748135d09",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 6 (395.103971ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 14:56:49.914090   33597 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-919000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (250.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-919000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-919000 create -f testdata/busybox.yaml: exit status 1 (35.830899ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-919000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-919000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-919000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-919000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f",
	        "Created": "2023-02-23T22:52:47.108009889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:52:47.393667172Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hosts",
	        "LogPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f-json.log",
	        "Name": "/old-k8s-version-919000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-919000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-919000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-919000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-919000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-919000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a3aa73756f04f765ad6387b630175a269d03136baccdaf5a3a9fdc4f1198b973",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62097"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62098"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62096"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a3aa73756f04",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-919000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5b30451d4570",
	                        "old-k8s-version-919000"
	                    ],
	                    "NetworkID": "c7154bbdfe1ae896999b2fd2c462dec29ff61281e64aa32aac9e788f781af78c",
	                    "EndpointID": "ebc723f5d2c72d46fce5dc0c853fb943a03ba08eeae02776867e996748135d09",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 6 (398.973502ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 14:56:50.410466   33612 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-919000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-919000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-919000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f",
	        "Created": "2023-02-23T22:52:47.108009889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:52:47.393667172Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hosts",
	        "LogPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f-json.log",
	        "Name": "/old-k8s-version-919000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-919000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-919000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-919000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-919000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-919000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a3aa73756f04f765ad6387b630175a269d03136baccdaf5a3a9fdc4f1198b973",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62097"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62098"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62096"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a3aa73756f04",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-919000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5b30451d4570",
	                        "old-k8s-version-919000"
	                    ],
	                    "NetworkID": "c7154bbdfe1ae896999b2fd2c462dec29ff61281e64aa32aac9e788f781af78c",
	                    "EndpointID": "ebc723f5d2c72d46fce5dc0c853fb943a03ba08eeae02776867e996748135d09",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 6 (392.076067ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 14:56:50.861095   33624 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-919000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.95s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-919000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0223 14:57:03.270739   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:57:04.524051   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:04.529422   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:04.541322   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:04.561499   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:04.601655   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:04.681798   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:04.842281   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:05.163020   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:05.803138   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:06.089682   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:57:06.234350   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 14:57:07.078564   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:57:07.084503   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:09.645175   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:14.765476   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:25.007814   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:25.885169   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:57:27.677955   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kindnet-452000/client.crt: no such file or directory
E0223 14:57:45.265835   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:57:45.272223   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:57:45.284388   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:57:45.305426   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:57:45.346067   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:57:45.428262   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:57:45.488574   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:57:45.590107   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:57:45.911329   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:57:46.551620   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:57:47.832731   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:57:50.393056   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:57:50.498293   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:57:55.450355   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kindnet-452000/client.crt: no such file or directory
E0223 14:57:55.514329   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:58:05.754795   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:58:18.181812   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:58:22.413935   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:58:26.235594   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:58:26.449951   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-919000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m36.762060995s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-919000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-919000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-919000 describe deploy/metrics-server -n kube-system: exit status 1 (35.452767ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-919000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-919000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-919000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-919000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f",
	        "Created": "2023-02-23T22:52:47.108009889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274292,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:52:47.393667172Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hosts",
	        "LogPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f-json.log",
	        "Name": "/old-k8s-version-919000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-919000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-919000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-919000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-919000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-919000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a3aa73756f04f765ad6387b630175a269d03136baccdaf5a3a9fdc4f1198b973",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62097"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62098"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62094"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62096"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a3aa73756f04",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-919000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5b30451d4570",
	                        "old-k8s-version-919000"
	                    ],
	                    "NetworkID": "c7154bbdfe1ae896999b2fd2c462dec29ff61281e64aa32aac9e788f781af78c",
	                    "EndpointID": "ebc723f5d2c72d46fce5dc0c853fb943a03ba08eeae02776867e996748135d09",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
E0223 14:58:28.012394   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 6 (416.24125ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 14:58:28.138886   33739 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-919000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (496.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-919000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0223 14:58:47.807775   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:59:07.197872   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 14:59:19.429939   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:59:23.234886   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:59:47.116044   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:59:48.372887   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 14:59:50.923928   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 15:00:29.122437   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 15:00:44.170820   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 15:00:59.743715   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
E0223 15:01:03.917487   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 15:01:11.858193   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 15:01:31.653072   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 15:01:40.300533   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 15:01:49.299769   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 15:02:04.532120   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 15:02:06.242106   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 15:02:27.686006   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kindnet-452000/client.crt: no such file or directory
E0223 15:02:32.219060   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 15:02:45.273890   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 15:02:50.508286   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 15:03:12.969390   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 15:03:22.424588   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-919000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m11.783897034s)

                                                
                                                
-- stdout --
	* [old-k8s-version-919000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-919000 in cluster old-k8s-version-919000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-919000" ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 14:58:30.144161   33771 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:58:30.144359   33771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:58:30.144365   33771 out.go:309] Setting ErrFile to fd 2...
	I0223 14:58:30.144369   33771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:58:30.144477   33771 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:58:30.145837   33771 out.go:303] Setting JSON to false
	I0223 14:58:30.164324   33771 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8884,"bootTime":1677184226,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 14:58:30.164470   33771 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 14:58:30.186598   33771 out.go:177] * [old-k8s-version-919000] minikube v1.29.0 on Darwin 13.2
	I0223 14:58:30.208456   33771 notify.go:220] Checking for updates...
	I0223 14:58:30.229351   33771 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 14:58:30.250781   33771 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:58:30.272332   33771 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 14:58:30.293580   33771 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 14:58:30.314392   33771 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	I0223 14:58:30.335262   33771 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 14:58:30.359182   33771 config.go:182] Loaded profile config "old-k8s-version-919000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 14:58:30.380499   33771 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0223 14:58:30.401392   33771 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 14:58:30.464538   33771 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 14:58:30.464659   33771 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:58:30.606210   33771 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 22:58:30.514754782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:58:30.649715   33771 out.go:177] * Using the docker driver based on existing profile
	I0223 14:58:30.670507   33771 start.go:296] selected driver: docker
	I0223 14:58:30.670526   33771 start.go:857] validating driver "docker" against &{Name:old-k8s-version-919000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-919000 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:58:30.670608   33771 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 14:58:30.673088   33771 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:58:30.817826   33771 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 22:58:30.723522669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:58:30.817977   33771 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 14:58:30.817998   33771 cni.go:84] Creating CNI manager for ""
	I0223 14:58:30.818011   33771 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 14:58:30.818021   33771 start_flags.go:319] config:
	{Name:old-k8s-version-919000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-919000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:58:30.860826   33771 out.go:177] * Starting control plane node old-k8s-version-919000 in cluster old-k8s-version-919000
	I0223 14:58:30.882515   33771 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 14:58:30.903804   33771 out.go:177] * Pulling base image ...
	I0223 14:58:30.961489   33771 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 14:58:30.961492   33771 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 14:58:30.961611   33771 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 14:58:30.961642   33771 cache.go:57] Caching tarball of preloaded images
	I0223 14:58:30.962700   33771 preload.go:174] Found /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 14:58:30.962766   33771 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 14:58:30.963121   33771 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/config.json ...
	I0223 14:58:31.018286   33771 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 14:58:31.018305   33771 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 14:58:31.018327   33771 cache.go:193] Successfully downloaded all kic artifacts
	I0223 14:58:31.018388   33771 start.go:364] acquiring machines lock for old-k8s-version-919000: {Name:mk1103874b67893bb0fc52742240655036abe57b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 14:58:31.018512   33771 start.go:368] acquired machines lock for "old-k8s-version-919000" in 104.818µs
	I0223 14:58:31.018543   33771 start.go:96] Skipping create...Using existing machine configuration
	I0223 14:58:31.018552   33771 fix.go:55] fixHost starting: 
	I0223 14:58:31.018814   33771 cli_runner.go:164] Run: docker container inspect old-k8s-version-919000 --format={{.State.Status}}
	I0223 14:58:31.076545   33771 fix.go:103] recreateIfNeeded on old-k8s-version-919000: state=Stopped err=<nil>
	W0223 14:58:31.076575   33771 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 14:58:31.098022   33771 out.go:177] * Restarting existing docker container for "old-k8s-version-919000" ...
	I0223 14:58:31.140659   33771 cli_runner.go:164] Run: docker start old-k8s-version-919000
	I0223 14:58:31.469553   33771 cli_runner.go:164] Run: docker container inspect old-k8s-version-919000 --format={{.State.Status}}
	I0223 14:58:31.529697   33771 kic.go:426] container "old-k8s-version-919000" state is running.
	I0223 14:58:31.530288   33771 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-919000
	I0223 14:58:31.592576   33771 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/config.json ...
	I0223 14:58:31.593025   33771 machine.go:88] provisioning docker machine ...
	I0223 14:58:31.593052   33771 ubuntu.go:169] provisioning hostname "old-k8s-version-919000"
	I0223 14:58:31.593145   33771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:58:31.652779   33771 main.go:141] libmachine: Using SSH client type: native
	I0223 14:58:31.653208   33771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62350 <nil> <nil>}
	I0223 14:58:31.653226   33771 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-919000 && echo "old-k8s-version-919000" | sudo tee /etc/hostname
	I0223 14:58:31.803501   33771 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-919000
	
	I0223 14:58:31.803608   33771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:58:31.864318   33771 main.go:141] libmachine: Using SSH client type: native
	I0223 14:58:31.864669   33771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62350 <nil> <nil>}
	I0223 14:58:31.864684   33771 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-919000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-919000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-919000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 14:58:31.997179   33771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 14:58:31.997201   33771 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
	I0223 14:58:31.997240   33771 ubuntu.go:177] setting up certificates
	I0223 14:58:31.997247   33771 provision.go:83] configureAuth start
	I0223 14:58:31.997337   33771 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-919000
	I0223 14:58:32.054631   33771 provision.go:138] copyHostCerts
	I0223 14:58:32.054738   33771 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
	I0223 14:58:32.054748   33771 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 14:58:32.054848   33771 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
	I0223 14:58:32.055060   33771 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
	I0223 14:58:32.055067   33771 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 14:58:32.055142   33771 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
	I0223 14:58:32.055292   33771 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
	I0223 14:58:32.055298   33771 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 14:58:32.055360   33771 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
	I0223 14:58:32.055482   33771 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-919000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-919000]
	I0223 14:58:32.192340   33771 provision.go:172] copyRemoteCerts
	I0223 14:58:32.192403   33771 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 14:58:32.192461   33771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:58:32.249965   33771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62350 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/old-k8s-version-919000/id_rsa Username:docker}
	I0223 14:58:32.344689   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 14:58:32.361968   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0223 14:58:32.379294   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 14:58:32.396190   33771 provision.go:86] duration metric: configureAuth took 398.919595ms
	I0223 14:58:32.396205   33771 ubuntu.go:193] setting minikube options for container-runtime
	I0223 14:58:32.396363   33771 config.go:182] Loaded profile config "old-k8s-version-919000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 14:58:32.396441   33771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:58:32.454775   33771 main.go:141] libmachine: Using SSH client type: native
	I0223 14:58:32.455153   33771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62350 <nil> <nil>}
	I0223 14:58:32.455163   33771 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 14:58:32.589323   33771 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 14:58:32.589337   33771 ubuntu.go:71] root file system type: overlay
	I0223 14:58:32.589425   33771 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 14:58:32.589509   33771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:58:32.646992   33771 main.go:141] libmachine: Using SSH client type: native
	I0223 14:58:32.647347   33771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62350 <nil> <nil>}
	I0223 14:58:32.647400   33771 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 14:58:32.787201   33771 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 14:58:32.787298   33771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:58:32.845748   33771 main.go:141] libmachine: Using SSH client type: native
	I0223 14:58:32.846106   33771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62350 <nil> <nil>}
	I0223 14:58:32.846119   33771 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 14:58:32.983043   33771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
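The unit file echoed above relies on the pattern its own comments describe: an empty `ExecStart=` line first clears the command inherited from the base configuration, and exactly one non-empty `ExecStart=` follows, because systemd only accepts multiple ExecStart commands for Type=oneshot services. As a purely illustrative sketch (not minikube code; the file path and function name below are invented for this note), a check for that pattern could look like:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasValidExecStartReset reports whether a unit file clears any inherited
// ExecStart with an empty "ExecStart=" line and then sets exactly one command.
func hasValidExecStartReset(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	var sawReset bool
	var commands int
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "ExecStart=") {
			continue
		}
		if line == "ExecStart=" {
			sawReset = true // empty assignment resets the inherited command list
		} else {
			commands++ // an actual command line
		}
	}
	if err := sc.Err(); err != nil {
		return false, err
	}
	return sawReset && commands == 1, nil
}

func main() {
	ok, err := hasValidExecStartReset("/lib/systemd/system/docker.service")
	fmt.Println(ok, err)
}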
	I0223 14:58:32.983066   33771 machine.go:91] provisioned docker machine in 1.389985367s
	I0223 14:58:32.983077   33771 start.go:300] post-start starting for "old-k8s-version-919000" (driver="docker")
	I0223 14:58:32.983084   33771 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 14:58:32.983158   33771 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 14:58:32.983211   33771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:58:33.040735   33771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62350 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/old-k8s-version-919000/id_rsa Username:docker}
	I0223 14:58:33.136774   33771 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 14:58:33.140463   33771 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 14:58:33.140484   33771 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 14:58:33.140491   33771 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 14:58:33.140495   33771 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 14:58:33.140503   33771 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
	I0223 14:58:33.140601   33771 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
	I0223 14:58:33.140761   33771 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
	I0223 14:58:33.140934   33771 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 14:58:33.148252   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:58:33.165709   33771 start.go:303] post-start completed in 182.6164ms
	I0223 14:58:33.165781   33771 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 14:58:33.165852   33771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:58:33.223621   33771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62350 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/old-k8s-version-919000/id_rsa Username:docker}
	I0223 14:58:33.313770   33771 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 14:58:33.318281   33771 fix.go:57] fixHost completed within 2.299659403s
	I0223 14:58:33.318301   33771 start.go:83] releasing machines lock for "old-k8s-version-919000", held for 2.299711454s
	I0223 14:58:33.318395   33771 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-919000
	I0223 14:58:33.375213   33771 ssh_runner.go:195] Run: cat /version.json
	I0223 14:58:33.375237   33771 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0223 14:58:33.375310   33771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:58:33.375319   33771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:58:33.434734   33771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62350 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/old-k8s-version-919000/id_rsa Username:docker}
	I0223 14:58:33.434834   33771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62350 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/old-k8s-version-919000/id_rsa Username:docker}
	I0223 14:58:33.525828   33771 ssh_runner.go:195] Run: systemctl --version
	I0223 14:58:33.731245   33771 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0223 14:58:33.736441   33771 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 14:58:33.736493   33771 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 14:58:33.744497   33771 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 14:58:33.752173   33771 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0223 14:58:33.752189   33771 start.go:485] detecting cgroup driver to use...
	I0223 14:58:33.752199   33771 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:58:33.752282   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:58:33.765810   33771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0223 14:58:33.774554   33771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 14:58:33.783184   33771 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 14:58:33.783252   33771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 14:58:33.792451   33771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:58:33.801289   33771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 14:58:33.810430   33771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 14:58:33.819249   33771 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 14:58:33.827181   33771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 14:58:33.835684   33771 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 14:58:33.842900   33771 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 14:58:33.850288   33771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:58:33.914159   33771 ssh_runner.go:195] Run: sudo systemctl restart containerd
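The sequence of sed edits above rewrites /etc/containerd/config.toml in place so that containerd uses the cgroupfs driver and the runc v2 shim, then restarts the service. Condensed into a standalone sketch (the config path is assumed to be the default one shown in the log):

  # Sketch: switch containerd to the cgroupfs driver and the runc v2 shim,
  # mirroring the sed edits above, then restart the service.
  CFG=/etc/containerd/config.toml
  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
  sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
  sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
  sudo systemctl daemon-reload && sudo systemctl restart containerd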
	I0223 14:58:33.984580   33771 start.go:485] detecting cgroup driver to use...
	I0223 14:58:33.984600   33771 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 14:58:33.984673   33771 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 14:58:33.995956   33771 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 14:58:33.996039   33771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 14:58:34.006228   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 14:58:34.020333   33771 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 14:58:34.117988   33771 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 14:58:34.175496   33771 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 14:58:34.175518   33771 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 14:58:34.214840   33771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 14:58:34.276557   33771 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 14:58:34.514245   33771 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:58:34.540356   33771 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 14:58:34.589250   33771 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0223 14:58:34.589338   33771 cli_runner.go:164] Run: docker exec -t old-k8s-version-919000 dig +short host.docker.internal
	I0223 14:58:34.698919   33771 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 14:58:34.699029   33771 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 14:58:34.704301   33771 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
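The /etc/hosts update above uses a filter-and-append idiom rather than an in-place edit: any existing host.minikube.internal line is dropped, a fresh one is appended, and the temporary file is copied back over /etc/hosts. A minimal sketch of the same idiom, with the name and address from the log line above used as illustrative values:

  # Sketch: upsert a hosts entry by filtering out any stale line and appending a
  # fresh one, then copying the result back over /etc/hosts (same idiom as above).
  NAME=host.minikube.internal   # name taken from the log line above
  IP=192.168.65.2               # address taken from the log line above
  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
  sudo cp /tmp/hosts.$$ /etc/hosts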
	I0223 14:58:34.715401   33771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:58:34.773847   33771 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 14:58:34.773927   33771 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:58:34.793276   33771 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 14:58:34.793290   33771 docker.go:560] Images already preloaded, skipping extraction
	I0223 14:58:34.793360   33771 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 14:58:34.814708   33771 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0223 14:58:34.814730   33771 cache_images.go:84] Images are preloaded, skipping loading
	I0223 14:58:34.814828   33771 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 14:58:34.841594   33771 cni.go:84] Creating CNI manager for ""
	I0223 14:58:34.841615   33771 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 14:58:34.841630   33771 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 14:58:34.841646   33771 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-919000 NodeName:old-k8s-version-919000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 14:58:34.841772   33771 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-919000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-919000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 14:58:34.841844   33771 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-919000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-919000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 14:58:34.841915   33771 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0223 14:58:34.850012   33771 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 14:58:34.850073   33771 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 14:58:34.857406   33771 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0223 14:58:34.870169   33771 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 14:58:34.883080   33771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0223 14:58:34.895949   33771 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0223 14:58:34.899998   33771 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 14:58:34.909934   33771 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000 for IP: 192.168.67.2
	I0223 14:58:34.909952   33771 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:58:34.910108   33771 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
	I0223 14:58:34.910161   33771 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
	I0223 14:58:34.910292   33771 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/client.key
	I0223 14:58:34.910371   33771 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.key.c7fa3a9e
	I0223 14:58:34.910433   33771 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/proxy-client.key
	I0223 14:58:34.910645   33771 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
	W0223 14:58:34.910682   33771 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
	I0223 14:58:34.910692   33771 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 14:58:34.910727   33771 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
	I0223 14:58:34.910758   33771 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
	I0223 14:58:34.910793   33771 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
	I0223 14:58:34.910859   33771 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
	I0223 14:58:34.911422   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 14:58:34.928925   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 14:58:34.946398   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 14:58:34.965008   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/old-k8s-version-919000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 14:58:34.982809   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 14:58:35.000092   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 14:58:35.017530   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 14:58:35.034902   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 14:58:35.052323   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
	I0223 14:58:35.069847   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
	I0223 14:58:35.087648   33771 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 14:58:35.105041   33771 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 14:58:35.117849   33771 ssh_runner.go:195] Run: openssl version
	I0223 14:58:35.123643   33771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
	I0223 14:58:35.131848   33771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
	I0223 14:58:35.135956   33771 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 14:58:35.135999   33771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
	I0223 14:58:35.141307   33771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
	I0223 14:58:35.148952   33771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
	I0223 14:58:35.173847   33771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
	I0223 14:58:35.178099   33771 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 14:58:35.178149   33771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
	I0223 14:58:35.183593   33771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 14:58:35.191077   33771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 14:58:35.199289   33771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:58:35.203277   33771 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:58:35.203321   33771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 14:58:35.208868   33771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
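Each certificate copied to /usr/share/ca-certificates above is also linked into /etc/ssl/certs under its OpenSSL subject-hash name (<hash>.0), which is how OpenSSL looks up trusted CAs. A short sketch of that step for the minikubeCA certificate shown above:

  # Sketch: link a CA certificate into the OpenSSL trust directory under its
  # subject-hash name, as done above for each certificate (e.g. b5213941.0).
  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"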
	I0223 14:58:35.216447   33771 kubeadm.go:401] StartCluster: {Name:old-k8s-version-919000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-919000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:58:35.216561   33771 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 14:58:35.237017   33771 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 14:58:35.244810   33771 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 14:58:35.244829   33771 kubeadm.go:633] restartCluster start
	I0223 14:58:35.244882   33771 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 14:58:35.251897   33771 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:35.251979   33771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-919000
	I0223 14:58:35.312795   33771 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-919000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:58:35.312960   33771 kubeconfig.go:146] "old-k8s-version-919000" context is missing from /Users/jenkins/minikube-integration/15909-14738/kubeconfig - will repair!
	I0223 14:58:35.313290   33771 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/kubeconfig: {Name:mk366c13f6069774a57c4d74123d5172c8522a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 14:58:35.314647   33771 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 14:58:35.322638   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:35.322709   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:35.331526   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:35.833659   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:35.833815   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:35.845236   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:36.333833   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:36.333962   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:36.344769   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:36.831710   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:36.831864   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:36.842703   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:37.333184   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:37.333361   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:37.344566   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:37.831869   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:37.832066   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:37.842792   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:38.332839   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:38.333003   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:38.343742   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:38.832658   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:38.832857   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:38.844108   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:39.331733   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:39.331820   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:39.341329   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:39.832118   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:39.832203   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:39.841900   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:40.332278   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:40.332420   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:40.342951   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:40.833047   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:40.833219   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:40.844198   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:41.332460   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:41.332561   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:41.342574   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:41.833865   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:41.834122   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:41.844953   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:42.332479   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:42.332554   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:42.341784   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:42.832091   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:42.832209   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:42.843258   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:43.332130   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:43.332251   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:43.342874   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:43.832267   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:43.832390   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:43.843430   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:44.331882   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:44.332070   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:44.342515   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:44.831930   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:44.831998   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:44.841223   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:45.332610   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:45.332766   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:45.343912   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:45.343926   33771 api_server.go:165] Checking apiserver status ...
	I0223 14:58:45.343990   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 14:58:45.353467   33771 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:58:45.353484   33771 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0223 14:58:45.353495   33771 kubeadm.go:1120] stopping kube-system containers ...
	I0223 14:58:45.353581   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 14:58:45.372981   33771 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 14:58:45.383887   33771 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 14:58:45.391632   33771 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Feb 23 22:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Feb 23 22:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Feb 23 22:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Feb 23 22:54 /etc/kubernetes/scheduler.conf
	
	I0223 14:58:45.391695   33771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0223 14:58:45.399555   33771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0223 14:58:45.407280   33771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0223 14:58:45.414940   33771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0223 14:58:45.422845   33771 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 14:58:45.430511   33771 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 14:58:45.430523   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 14:58:45.483260   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 14:58:46.253225   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 14:58:46.418025   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 14:58:46.475718   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
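Rather than running a full kubeadm init, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A condensed sketch of that sequence, assuming the v1.16.0 binaries directory used in this run:

  # Sketch: replay the kubeadm init phases used by the cluster-restart flow above.
  K8S_BIN=/var/lib/minikube/binaries/v1.16.0
  CFG=/var/tmp/minikube/kubeadm.yaml
  # $phase is intentionally unquoted so multi-word phases split into arguments.
  for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
    sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase $phase --config "$CFG"
  done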
	I0223 14:58:46.556977   33771 api_server.go:51] waiting for apiserver process to appear ...
	I0223 14:58:46.557044   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:47.065948   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:47.565920   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:48.066470   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:48.566268   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:49.067626   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:49.566051   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:50.066626   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:50.566971   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:51.066618   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:51.568267   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:52.067235   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:52.566393   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:53.066873   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:53.566751   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:54.068134   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:54.566944   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:55.067656   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:55.566284   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:56.066932   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:56.566878   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:57.066846   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:57.567710   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:58.066856   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:58.566435   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:59.066299   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:58:59.567691   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:00.067304   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:00.566663   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:01.067059   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:01.566503   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:02.067391   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:02.566573   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:03.066414   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:03.567167   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:04.067111   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:04.566845   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:05.068035   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:05.568673   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:06.066900   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:06.568142   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:07.066967   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:07.567198   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:08.067102   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:08.567557   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:09.067450   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:09.566627   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:10.068503   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:10.568545   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:11.068042   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:11.568091   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:12.067156   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:12.567332   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:13.066684   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:13.566778   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:14.068086   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:14.567464   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:15.066932   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:15.566782   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:16.067329   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:16.567800   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:17.067951   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:17.566939   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:18.066838   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:18.566877   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:19.067889   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:19.567222   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:20.068213   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:20.566961   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:21.067106   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:21.567083   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:22.068480   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:22.567684   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:23.067267   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:23.567845   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:24.067972   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:24.567332   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:25.067256   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:25.567853   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:26.067277   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:26.569181   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:27.067981   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:27.567707   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:28.067697   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:28.567302   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:29.067289   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:29.567299   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:30.068019   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:30.568002   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:31.068635   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:31.567638   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:32.068514   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:32.567383   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:33.068410   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:33.567638   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:34.068315   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:34.567797   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:35.067479   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:35.567999   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:36.067776   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:36.567874   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:37.067815   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:37.567466   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:38.067391   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:38.567412   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:39.067566   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:39.567751   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:40.068153   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:40.567656   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:41.067485   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:41.568799   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:42.068169   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:42.569640   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:43.068027   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:43.567551   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:44.068276   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:44.568063   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:45.067887   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:45.569737   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:46.067737   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:46.567633   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 14:59:46.588332   33771 logs.go:277] 0 containers: []
	W0223 14:59:46.588349   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 14:59:46.588425   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 14:59:46.609584   33771 logs.go:277] 0 containers: []
	W0223 14:59:46.609601   33771 logs.go:279] No container was found matching "etcd"
	I0223 14:59:46.609698   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 14:59:46.630787   33771 logs.go:277] 0 containers: []
	W0223 14:59:46.630814   33771 logs.go:279] No container was found matching "coredns"
	I0223 14:59:46.630920   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 14:59:46.651337   33771 logs.go:277] 0 containers: []
	W0223 14:59:46.651353   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 14:59:46.651433   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 14:59:46.671760   33771 logs.go:277] 0 containers: []
	W0223 14:59:46.671777   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 14:59:46.671861   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 14:59:46.694257   33771 logs.go:277] 0 containers: []
	W0223 14:59:46.694276   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 14:59:46.694349   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 14:59:46.713036   33771 logs.go:277] 0 containers: []
	W0223 14:59:46.713051   33771 logs.go:279] No container was found matching "kindnet"
	I0223 14:59:46.713142   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 14:59:46.736221   33771 logs.go:277] 0 containers: []
	W0223 14:59:46.736235   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 14:59:46.736243   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 14:59:46.736250   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 14:59:46.778358   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 14:59:46.778377   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 14:59:46.790645   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 14:59:46.790658   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 14:59:46.858051   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 14:59:46.858079   33771 logs.go:123] Gathering logs for Docker ...
	I0223 14:59:46.858087   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 14:59:46.879345   33771 logs.go:123] Gathering logs for container status ...
	I0223 14:59:46.879361   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 14:59:48.925648   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046215536s)
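The "container status" gathering above relies on a small fallback: the `which crictl || echo crictl` command substitution lets the same one-liner try crictl first and fall back to docker ps -a when crictl is missing or fails. As a standalone sketch:

  # Sketch: prefer crictl when available, otherwise fall back to docker,
  # matching the one-liner used for the container-status logs above.
  sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a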
	I0223 14:59:51.427942   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:51.567903   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 14:59:51.588954   33771 logs.go:277] 0 containers: []
	W0223 14:59:51.588967   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 14:59:51.589041   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 14:59:51.607762   33771 logs.go:277] 0 containers: []
	W0223 14:59:51.607775   33771 logs.go:279] No container was found matching "etcd"
	I0223 14:59:51.607843   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 14:59:51.626492   33771 logs.go:277] 0 containers: []
	W0223 14:59:51.626510   33771 logs.go:279] No container was found matching "coredns"
	I0223 14:59:51.626585   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 14:59:51.646030   33771 logs.go:277] 0 containers: []
	W0223 14:59:51.646045   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 14:59:51.646124   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 14:59:51.666530   33771 logs.go:277] 0 containers: []
	W0223 14:59:51.666545   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 14:59:51.666616   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 14:59:51.688575   33771 logs.go:277] 0 containers: []
	W0223 14:59:51.688588   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 14:59:51.688659   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 14:59:51.707634   33771 logs.go:277] 0 containers: []
	W0223 14:59:51.707651   33771 logs.go:279] No container was found matching "kindnet"
	I0223 14:59:51.707738   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 14:59:51.727690   33771 logs.go:277] 0 containers: []
	W0223 14:59:51.727703   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 14:59:51.727711   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 14:59:51.727719   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 14:59:51.739877   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 14:59:51.739893   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 14:59:51.820728   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 14:59:51.820740   33771 logs.go:123] Gathering logs for Docker ...
	I0223 14:59:51.820748   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 14:59:51.841547   33771 logs.go:123] Gathering logs for container status ...
	I0223 14:59:51.841561   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 14:59:53.886230   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044598313s)
	I0223 14:59:53.886338   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 14:59:53.886346   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 14:59:56.425155   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:59:56.569368   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 14:59:56.591544   33771 logs.go:277] 0 containers: []
	W0223 14:59:56.591559   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 14:59:56.591633   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 14:59:56.610557   33771 logs.go:277] 0 containers: []
	W0223 14:59:56.610573   33771 logs.go:279] No container was found matching "etcd"
	I0223 14:59:56.610650   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 14:59:56.629850   33771 logs.go:277] 0 containers: []
	W0223 14:59:56.629865   33771 logs.go:279] No container was found matching "coredns"
	I0223 14:59:56.629936   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 14:59:56.649282   33771 logs.go:277] 0 containers: []
	W0223 14:59:56.649296   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 14:59:56.649367   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 14:59:56.670983   33771 logs.go:277] 0 containers: []
	W0223 14:59:56.670999   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 14:59:56.671074   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 14:59:56.691443   33771 logs.go:277] 0 containers: []
	W0223 14:59:56.691459   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 14:59:56.691530   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 14:59:56.710736   33771 logs.go:277] 0 containers: []
	W0223 14:59:56.710751   33771 logs.go:279] No container was found matching "kindnet"
	I0223 14:59:56.710831   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 14:59:56.730449   33771 logs.go:277] 0 containers: []
	W0223 14:59:56.730463   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 14:59:56.730491   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 14:59:56.730501   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 14:59:56.767960   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 14:59:56.767979   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 14:59:56.780178   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 14:59:56.780195   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 14:59:56.833918   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 14:59:56.833929   33771 logs.go:123] Gathering logs for Docker ...
	I0223 14:59:56.833936   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 14:59:56.854734   33771 logs.go:123] Gathering logs for container status ...
	I0223 14:59:56.854764   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 14:59:58.899719   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044884329s)
	I0223 15:00:01.402177   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:00:01.570222   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:00:01.589489   33771 logs.go:277] 0 containers: []
	W0223 15:00:01.589503   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:00:01.589575   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:00:01.613078   33771 logs.go:277] 0 containers: []
	W0223 15:00:01.613098   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:00:01.613184   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:00:01.637989   33771 logs.go:277] 0 containers: []
	W0223 15:00:01.638006   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:00:01.638094   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:00:01.665528   33771 logs.go:277] 0 containers: []
	W0223 15:00:01.665542   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:00:01.665621   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:00:01.686299   33771 logs.go:277] 0 containers: []
	W0223 15:00:01.686312   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:00:01.686386   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:00:01.707346   33771 logs.go:277] 0 containers: []
	W0223 15:00:01.707365   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:00:01.707461   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:00:01.738835   33771 logs.go:277] 0 containers: []
	W0223 15:00:01.738850   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:00:01.738956   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:00:01.762837   33771 logs.go:277] 0 containers: []
	W0223 15:00:01.762854   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:00:01.762862   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:00:01.762878   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:00:01.824322   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:00:01.824335   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:00:01.824349   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:00:01.851050   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:00:01.851079   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:00:03.901199   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050049476s)
	I0223 15:00:03.901305   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:00:03.901312   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:00:03.941564   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:00:03.941582   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:00:06.454433   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:00:06.569195   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:00:06.587746   33771 logs.go:277] 0 containers: []
	W0223 15:00:06.587759   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:00:06.587833   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:00:06.606936   33771 logs.go:277] 0 containers: []
	W0223 15:00:06.606955   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:00:06.607069   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:00:06.630192   33771 logs.go:277] 0 containers: []
	W0223 15:00:06.630212   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:00:06.630300   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:00:06.649617   33771 logs.go:277] 0 containers: []
	W0223 15:00:06.649642   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:00:06.649729   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:00:06.668509   33771 logs.go:277] 0 containers: []
	W0223 15:00:06.668527   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:00:06.668615   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:00:06.688951   33771 logs.go:277] 0 containers: []
	W0223 15:00:06.688965   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:00:06.689035   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:00:06.709036   33771 logs.go:277] 0 containers: []
	W0223 15:00:06.709058   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:00:06.709145   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:00:06.729566   33771 logs.go:277] 0 containers: []
	W0223 15:00:06.729583   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:00:06.729593   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:00:06.729604   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:00:06.745524   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:00:06.745552   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:00:06.812860   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:00:06.812875   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:00:06.812885   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:00:06.838069   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:00:06.838089   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:00:08.882431   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0442719s)
	I0223 15:00:08.882546   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:00:08.882559   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:00:11.425646   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:00:11.568420   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:00:11.588600   33771 logs.go:277] 0 containers: []
	W0223 15:00:11.588616   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:00:11.588691   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:00:11.607595   33771 logs.go:277] 0 containers: []
	W0223 15:00:11.607608   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:00:11.607677   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:00:11.626682   33771 logs.go:277] 0 containers: []
	W0223 15:00:11.626701   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:00:11.626780   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:00:11.645972   33771 logs.go:277] 0 containers: []
	W0223 15:00:11.645987   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:00:11.646055   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:00:11.665477   33771 logs.go:277] 0 containers: []
	W0223 15:00:11.665492   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:00:11.665567   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:00:11.685423   33771 logs.go:277] 0 containers: []
	W0223 15:00:11.685435   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:00:11.685509   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:00:11.706068   33771 logs.go:277] 0 containers: []
	W0223 15:00:11.706081   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:00:11.706152   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:00:11.725485   33771 logs.go:277] 0 containers: []
	W0223 15:00:11.725500   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:00:11.725508   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:00:11.725515   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:00:11.737282   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:00:11.737296   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:00:11.791006   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:00:11.791017   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:00:11.791028   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:00:11.811772   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:00:11.811785   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:00:13.857718   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045856312s)
	I0223 15:00:13.857853   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:00:13.857864   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:00:16.396267   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:00:16.569069   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:00:16.589218   33771 logs.go:277] 0 containers: []
	W0223 15:00:16.589234   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:00:16.589314   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:00:16.610286   33771 logs.go:277] 0 containers: []
	W0223 15:00:16.610300   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:00:16.610384   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:00:16.630629   33771 logs.go:277] 0 containers: []
	W0223 15:00:16.630644   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:00:16.630717   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:00:16.649498   33771 logs.go:277] 0 containers: []
	W0223 15:00:16.649513   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:00:16.649584   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:00:16.668294   33771 logs.go:277] 0 containers: []
	W0223 15:00:16.668309   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:00:16.668386   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:00:16.687627   33771 logs.go:277] 0 containers: []
	W0223 15:00:16.687640   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:00:16.687711   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:00:16.705904   33771 logs.go:277] 0 containers: []
	W0223 15:00:16.705920   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:00:16.705994   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:00:16.725529   33771 logs.go:277] 0 containers: []
	W0223 15:00:16.725547   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:00:16.725555   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:00:16.725562   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:00:16.749371   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:00:16.749396   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:00:18.801623   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052146137s)
	I0223 15:00:18.801735   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:00:18.801747   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:00:18.840054   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:00:18.840067   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:00:18.852155   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:00:18.852176   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:00:18.905987   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:00:21.406706   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:00:21.568898   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:00:21.589877   33771 logs.go:277] 0 containers: []
	W0223 15:00:21.589891   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:00:21.589972   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:00:21.609799   33771 logs.go:277] 0 containers: []
	W0223 15:00:21.609814   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:00:21.609917   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:00:21.628820   33771 logs.go:277] 0 containers: []
	W0223 15:00:21.628835   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:00:21.628903   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:00:21.647754   33771 logs.go:277] 0 containers: []
	W0223 15:00:21.647768   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:00:21.647840   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:00:21.666774   33771 logs.go:277] 0 containers: []
	W0223 15:00:21.666787   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:00:21.666863   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:00:21.686279   33771 logs.go:277] 0 containers: []
	W0223 15:00:21.686291   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:00:21.686360   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:00:21.705428   33771 logs.go:277] 0 containers: []
	W0223 15:00:21.705441   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:00:21.705507   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:00:21.725707   33771 logs.go:277] 0 containers: []
	W0223 15:00:21.725723   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:00:21.725731   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:00:21.725748   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:00:21.747265   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:00:21.747283   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:00:23.789592   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.042237681s)
	I0223 15:00:23.789708   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:00:23.789716   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:00:23.826813   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:00:23.826827   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:00:23.838600   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:00:23.838614   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:00:23.892751   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:00:26.393545   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:00:26.568882   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:00:26.589911   33771 logs.go:277] 0 containers: []
	W0223 15:00:26.589926   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:00:26.589997   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:00:26.609706   33771 logs.go:277] 0 containers: []
	W0223 15:00:26.609719   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:00:26.609788   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:00:26.629463   33771 logs.go:277] 0 containers: []
	W0223 15:00:26.629477   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:00:26.629546   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:00:26.648092   33771 logs.go:277] 0 containers: []
	W0223 15:00:26.648105   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:00:26.648185   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:00:26.667351   33771 logs.go:277] 0 containers: []
	W0223 15:00:26.667364   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:00:26.667439   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:00:26.686336   33771 logs.go:277] 0 containers: []
	W0223 15:00:26.686357   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:00:26.686441   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:00:26.705994   33771 logs.go:277] 0 containers: []
	W0223 15:00:26.706008   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:00:26.706079   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:00:26.725410   33771 logs.go:277] 0 containers: []
	W0223 15:00:26.725423   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:00:26.725431   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:00:26.725442   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:00:26.765360   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:00:26.765378   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:00:26.777769   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:00:26.777785   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:00:26.833053   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:00:26.833064   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:00:26.833077   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:00:26.854198   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:00:26.854212   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:00:28.898529   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044246129s)
	I0223 15:00:31.399140   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:00:31.569031   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:00:31.589952   33771 logs.go:277] 0 containers: []
	W0223 15:00:31.589965   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:00:31.590026   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:00:31.609248   33771 logs.go:277] 0 containers: []
	W0223 15:00:31.609262   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:00:31.609352   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:00:31.628214   33771 logs.go:277] 0 containers: []
	W0223 15:00:31.628228   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:00:31.628299   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:00:31.647118   33771 logs.go:277] 0 containers: []
	W0223 15:00:31.647131   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:00:31.647199   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:00:31.665421   33771 logs.go:277] 0 containers: []
	W0223 15:00:31.665435   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:00:31.665505   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:00:31.684420   33771 logs.go:277] 0 containers: []
	W0223 15:00:31.684432   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:00:31.684499   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:00:31.703109   33771 logs.go:277] 0 containers: []
	W0223 15:00:31.703123   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:00:31.703193   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:00:31.722893   33771 logs.go:277] 0 containers: []
	W0223 15:00:31.722911   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:00:31.722918   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:00:31.722925   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:00:31.761278   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:00:31.761293   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:00:31.773335   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:00:31.773348   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:00:31.827281   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:00:31.827297   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:00:31.827307   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:00:31.849100   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:00:31.849116   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:00:33.894427   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04524077s)
	I0223 15:00:36.395712   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:00:36.569225   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:00:36.591101   33771 logs.go:277] 0 containers: []
	W0223 15:00:36.591115   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:00:36.591196   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:00:36.610048   33771 logs.go:277] 0 containers: []
	W0223 15:00:36.610062   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:00:36.610133   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:00:36.628652   33771 logs.go:277] 0 containers: []
	W0223 15:00:36.628666   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:00:36.628736   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:00:36.648468   33771 logs.go:277] 0 containers: []
	W0223 15:00:36.648483   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:00:36.648561   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:00:36.668262   33771 logs.go:277] 0 containers: []
	W0223 15:00:36.668277   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:00:36.668346   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:00:36.686886   33771 logs.go:277] 0 containers: []
	W0223 15:00:36.686900   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:00:36.686969   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:00:36.705316   33771 logs.go:277] 0 containers: []
	W0223 15:00:36.705329   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:00:36.705399   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:00:36.724844   33771 logs.go:277] 0 containers: []
	W0223 15:00:36.724858   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:00:36.724865   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:00:36.724872   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:00:36.762705   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:00:36.762724   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:00:36.774984   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:00:36.775003   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:00:36.836227   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:00:36.836238   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:00:36.836246   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:00:36.857377   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:00:36.857390   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:00:38.903403   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045941763s)
	I0223 15:00:41.405819   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:00:41.571368   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:00:41.593412   33771 logs.go:277] 0 containers: []
	W0223 15:00:41.593425   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:00:41.593496   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:00:41.612987   33771 logs.go:277] 0 containers: []
	W0223 15:00:41.613000   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:00:41.613069   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:00:41.632990   33771 logs.go:277] 0 containers: []
	W0223 15:00:41.633003   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:00:41.633075   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:00:41.652752   33771 logs.go:277] 0 containers: []
	W0223 15:00:41.652765   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:00:41.652840   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:00:41.671411   33771 logs.go:277] 0 containers: []
	W0223 15:00:41.671425   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:00:41.671493   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:00:41.691515   33771 logs.go:277] 0 containers: []
	W0223 15:00:41.691528   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:00:41.691610   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:00:41.710670   33771 logs.go:277] 0 containers: []
	W0223 15:00:41.710684   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:00:41.710753   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:00:41.729778   33771 logs.go:277] 0 containers: []
	W0223 15:00:41.729791   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:00:41.729798   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:00:41.729813   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:00:41.751576   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:00:41.751591   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:00:43.795760   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044097962s)
	I0223 15:00:43.795874   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:00:43.795883   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:00:43.832860   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:00:43.832875   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:00:43.844641   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:00:43.844658   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:00:43.898250   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
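Every retry ends the same way: `kubectl describe nodes` against the in-node kubeconfig is refused on localhost:8443, i.e. nothing is listening where the apiserver should be. A quick manual spot-check of that endpoint, again illustrative only, with PROFILE as a placeholder and assuming `ss` is available in the node image, might look like:

	# Illustrative only: reproduce the failing describe-nodes call and check port 8443.
	PROFILE=placeholder-profile   # placeholder; substitute the real profile name

	# Same kubectl binary and kubeconfig paths as in the log above.
	minikube -p "$PROFILE" ssh -- \
	  sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig

	# If that is refused, confirm nothing is bound to the apiserver port inside the node.
	minikube -p "$PROFILE" ssh -- sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"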
	I0223 15:00:46.400583   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:00:46.569473   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:00:46.590465   33771 logs.go:277] 0 containers: []
	W0223 15:00:46.590481   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:00:46.590551   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:00:46.609699   33771 logs.go:277] 0 containers: []
	W0223 15:00:46.609715   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:00:46.609793   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:00:46.629667   33771 logs.go:277] 0 containers: []
	W0223 15:00:46.629679   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:00:46.629755   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:00:46.649957   33771 logs.go:277] 0 containers: []
	W0223 15:00:46.649972   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:00:46.650041   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:00:46.669667   33771 logs.go:277] 0 containers: []
	W0223 15:00:46.669680   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:00:46.669749   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:00:46.688867   33771 logs.go:277] 0 containers: []
	W0223 15:00:46.688881   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:00:46.688956   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:00:46.708182   33771 logs.go:277] 0 containers: []
	W0223 15:00:46.708197   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:00:46.708267   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:00:46.729321   33771 logs.go:277] 0 containers: []
	W0223 15:00:46.729335   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:00:46.729342   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:00:46.729349   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:00:46.767748   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:00:46.767766   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:00:46.780117   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:00:46.780131   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:00:46.836620   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:00:46.836631   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:00:46.836638   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:00:46.857481   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:00:46.857498   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:00:48.904825   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047256048s)
	I0223 15:00:51.406071   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:00:51.570249   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:00:51.591749   33771 logs.go:277] 0 containers: []
	W0223 15:00:51.591764   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:00:51.591835   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:00:51.611939   33771 logs.go:277] 0 containers: []
	W0223 15:00:51.611952   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:00:51.612023   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:00:51.631167   33771 logs.go:277] 0 containers: []
	W0223 15:00:51.631185   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:00:51.631269   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:00:51.651246   33771 logs.go:277] 0 containers: []
	W0223 15:00:51.651259   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:00:51.651330   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:00:51.670457   33771 logs.go:277] 0 containers: []
	W0223 15:00:51.670471   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:00:51.670538   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:00:51.691044   33771 logs.go:277] 0 containers: []
	W0223 15:00:51.691057   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:00:51.691128   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:00:51.711255   33771 logs.go:277] 0 containers: []
	W0223 15:00:51.711270   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:00:51.711348   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:00:51.732333   33771 logs.go:277] 0 containers: []
	W0223 15:00:51.732349   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:00:51.732357   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:00:51.732365   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:00:51.744704   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:00:51.744720   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:00:51.827237   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:00:51.827248   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:00:51.827259   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:00:51.848689   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:00:51.848705   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:00:53.893008   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044231246s)
	I0223 15:00:53.893122   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:00:53.893131   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:00:56.431589   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:00:56.571860   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:00:56.593360   33771 logs.go:277] 0 containers: []
	W0223 15:00:56.593373   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:00:56.593443   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:00:56.612089   33771 logs.go:277] 0 containers: []
	W0223 15:00:56.612102   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:00:56.612172   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:00:56.631856   33771 logs.go:277] 0 containers: []
	W0223 15:00:56.631870   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:00:56.631943   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:00:56.651179   33771 logs.go:277] 0 containers: []
	W0223 15:00:56.651195   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:00:56.651265   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:00:56.670402   33771 logs.go:277] 0 containers: []
	W0223 15:00:56.670415   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:00:56.670484   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:00:56.689949   33771 logs.go:277] 0 containers: []
	W0223 15:00:56.689964   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:00:56.690037   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:00:56.710085   33771 logs.go:277] 0 containers: []
	W0223 15:00:56.710097   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:00:56.710167   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:00:56.730026   33771 logs.go:277] 0 containers: []
	W0223 15:00:56.730041   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:00:56.730050   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:00:56.730058   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:00:56.785842   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:00:56.785854   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:00:56.785862   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:00:56.806968   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:00:56.806983   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:00:58.852475   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045420693s)
	I0223 15:00:58.852582   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:00:58.852589   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:00:58.890107   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:00:58.890125   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:01.404313   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:01:01.570362   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:01:01.590846   33771 logs.go:277] 0 containers: []
	W0223 15:01:01.590859   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:01:01.590931   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:01:01.609940   33771 logs.go:277] 0 containers: []
	W0223 15:01:01.609953   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:01:01.610022   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:01:01.629081   33771 logs.go:277] 0 containers: []
	W0223 15:01:01.629095   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:01:01.629168   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:01:01.647880   33771 logs.go:277] 0 containers: []
	W0223 15:01:01.647895   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:01:01.647966   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:01:01.667414   33771 logs.go:277] 0 containers: []
	W0223 15:01:01.667428   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:01:01.667500   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:01:01.686856   33771 logs.go:277] 0 containers: []
	W0223 15:01:01.686869   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:01:01.686938   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:01:01.705434   33771 logs.go:277] 0 containers: []
	W0223 15:01:01.705447   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:01:01.705519   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:01:01.724391   33771 logs.go:277] 0 containers: []
	W0223 15:01:01.724406   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:01:01.724413   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:01:01.724431   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:01:01.745307   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:01:01.745322   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:01:03.789967   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044571514s)
	I0223 15:01:03.790080   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:01:03.790087   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:01:03.828626   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:01:03.828643   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:03.840874   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:01:03.840891   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:01:03.896590   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:01:06.397699   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:01:06.570272   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:01:06.591167   33771 logs.go:277] 0 containers: []
	W0223 15:01:06.591181   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:01:06.591277   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:01:06.611266   33771 logs.go:277] 0 containers: []
	W0223 15:01:06.611285   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:01:06.611368   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:01:06.630379   33771 logs.go:277] 0 containers: []
	W0223 15:01:06.630395   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:01:06.630465   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:01:06.648575   33771 logs.go:277] 0 containers: []
	W0223 15:01:06.648590   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:01:06.648664   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:01:06.667199   33771 logs.go:277] 0 containers: []
	W0223 15:01:06.667226   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:01:06.667340   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:01:06.687717   33771 logs.go:277] 0 containers: []
	W0223 15:01:06.687731   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:01:06.687805   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:01:06.707898   33771 logs.go:277] 0 containers: []
	W0223 15:01:06.707912   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:01:06.707988   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:01:06.727737   33771 logs.go:277] 0 containers: []
	W0223 15:01:06.727752   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:01:06.727760   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:01:06.727767   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:01:06.788442   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:01:06.788455   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:01:06.788462   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:01:06.830184   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:01:06.830203   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:01:08.877542   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047267511s)
	I0223 15:01:08.877649   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:01:08.877658   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:01:08.915592   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:01:08.915607   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:11.428270   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:01:11.572289   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:01:11.593183   33771 logs.go:277] 0 containers: []
	W0223 15:01:11.593198   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:01:11.593269   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:01:11.612363   33771 logs.go:277] 0 containers: []
	W0223 15:01:11.612378   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:01:11.612453   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:01:11.632493   33771 logs.go:277] 0 containers: []
	W0223 15:01:11.632506   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:01:11.632572   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:01:11.652009   33771 logs.go:277] 0 containers: []
	W0223 15:01:11.652021   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:01:11.652087   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:01:11.670375   33771 logs.go:277] 0 containers: []
	W0223 15:01:11.670388   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:01:11.670472   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:01:11.689353   33771 logs.go:277] 0 containers: []
	W0223 15:01:11.689368   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:01:11.689439   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:01:11.708371   33771 logs.go:277] 0 containers: []
	W0223 15:01:11.708384   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:01:11.708467   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:01:11.728033   33771 logs.go:277] 0 containers: []
	W0223 15:01:11.728048   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:01:11.728056   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:01:11.728072   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:01:11.748793   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:01:11.748805   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:01:13.794313   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045437137s)
	I0223 15:01:13.794431   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:01:13.794439   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:01:13.832521   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:01:13.832536   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:13.844419   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:01:13.844434   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:01:13.899431   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:01:16.401697   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:01:16.571934   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:01:16.592534   33771 logs.go:277] 0 containers: []
	W0223 15:01:16.592549   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:01:16.592628   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:01:16.611962   33771 logs.go:277] 0 containers: []
	W0223 15:01:16.611976   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:01:16.612046   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:01:16.630991   33771 logs.go:277] 0 containers: []
	W0223 15:01:16.631005   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:01:16.631076   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:01:16.650816   33771 logs.go:277] 0 containers: []
	W0223 15:01:16.650830   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:01:16.650899   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:01:16.669706   33771 logs.go:277] 0 containers: []
	W0223 15:01:16.669720   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:01:16.669795   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:01:16.688720   33771 logs.go:277] 0 containers: []
	W0223 15:01:16.688737   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:01:16.688808   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:01:16.708496   33771 logs.go:277] 0 containers: []
	W0223 15:01:16.708509   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:01:16.708583   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:01:16.729390   33771 logs.go:277] 0 containers: []
	W0223 15:01:16.729404   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:01:16.729412   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:01:16.729420   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:01:16.768868   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:01:16.768887   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:16.781276   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:01:16.781292   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:01:16.835429   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:01:16.835444   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:01:16.835451   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:01:16.856362   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:01:16.856376   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:01:18.902472   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046024325s)
	I0223 15:01:21.403033   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:01:21.571905   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:01:21.592595   33771 logs.go:277] 0 containers: []
	W0223 15:01:21.592610   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:01:21.592679   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:01:21.612198   33771 logs.go:277] 0 containers: []
	W0223 15:01:21.612212   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:01:21.612281   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:01:21.631329   33771 logs.go:277] 0 containers: []
	W0223 15:01:21.631342   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:01:21.631413   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:01:21.650205   33771 logs.go:277] 0 containers: []
	W0223 15:01:21.650220   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:01:21.650289   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:01:21.669598   33771 logs.go:277] 0 containers: []
	W0223 15:01:21.669612   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:01:21.669684   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:01:21.688371   33771 logs.go:277] 0 containers: []
	W0223 15:01:21.688385   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:01:21.688457   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:01:21.708107   33771 logs.go:277] 0 containers: []
	W0223 15:01:21.708120   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:01:21.708189   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:01:21.727629   33771 logs.go:277] 0 containers: []
	W0223 15:01:21.727643   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:01:21.727651   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:01:21.727659   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:21.739545   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:01:21.739561   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:01:21.825486   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:01:21.825496   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:01:21.825503   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:01:21.846764   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:01:21.846781   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:01:23.891715   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044863471s)
	I0223 15:01:23.891826   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:01:23.891833   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:01:26.431271   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:01:26.570799   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:01:26.592650   33771 logs.go:277] 0 containers: []
	W0223 15:01:26.592665   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:01:26.592747   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:01:26.612605   33771 logs.go:277] 0 containers: []
	W0223 15:01:26.612619   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:01:26.612689   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:01:26.630771   33771 logs.go:277] 0 containers: []
	W0223 15:01:26.630786   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:01:26.630856   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:01:26.649469   33771 logs.go:277] 0 containers: []
	W0223 15:01:26.649483   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:01:26.649555   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:01:26.669247   33771 logs.go:277] 0 containers: []
	W0223 15:01:26.669261   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:01:26.669333   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:01:26.688607   33771 logs.go:277] 0 containers: []
	W0223 15:01:26.688620   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:01:26.688689   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:01:26.708774   33771 logs.go:277] 0 containers: []
	W0223 15:01:26.708788   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:01:26.708856   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:01:26.728261   33771 logs.go:277] 0 containers: []
	W0223 15:01:26.728273   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:01:26.728280   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:01:26.728290   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:01:26.765838   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:01:26.765857   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:26.778143   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:01:26.778156   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:01:26.832960   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:01:26.832983   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:01:26.832991   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:01:26.853627   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:01:26.853642   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:01:28.899691   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045978535s)
	I0223 15:01:31.402114   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:01:31.571199   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:01:31.591086   33771 logs.go:277] 0 containers: []
	W0223 15:01:31.591100   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:01:31.591170   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:01:31.610149   33771 logs.go:277] 0 containers: []
	W0223 15:01:31.610163   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:01:31.610231   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:01:31.628906   33771 logs.go:277] 0 containers: []
	W0223 15:01:31.628919   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:01:31.628993   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:01:31.648448   33771 logs.go:277] 0 containers: []
	W0223 15:01:31.648462   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:01:31.648533   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:01:31.666704   33771 logs.go:277] 0 containers: []
	W0223 15:01:31.666717   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:01:31.666786   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:01:31.685400   33771 logs.go:277] 0 containers: []
	W0223 15:01:31.685414   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:01:31.685483   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:01:31.704507   33771 logs.go:277] 0 containers: []
	W0223 15:01:31.704521   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:01:31.704589   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:01:31.723585   33771 logs.go:277] 0 containers: []
	W0223 15:01:31.723600   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:01:31.723607   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:01:31.723615   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:01:31.762602   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:01:31.762621   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:31.775006   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:01:31.775020   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:01:31.830131   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:01:31.830143   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:01:31.830151   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:01:31.851356   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:01:31.851370   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:01:33.895747   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044306741s)
	I0223 15:01:36.397605   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:01:36.572000   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:01:36.593094   33771 logs.go:277] 0 containers: []
	W0223 15:01:36.593107   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:01:36.593178   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:01:36.612328   33771 logs.go:277] 0 containers: []
	W0223 15:01:36.612343   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:01:36.612413   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:01:36.633311   33771 logs.go:277] 0 containers: []
	W0223 15:01:36.633326   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:01:36.633397   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:01:36.653596   33771 logs.go:277] 0 containers: []
	W0223 15:01:36.653611   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:01:36.653680   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:01:36.671638   33771 logs.go:277] 0 containers: []
	W0223 15:01:36.671651   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:01:36.671720   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:01:36.690626   33771 logs.go:277] 0 containers: []
	W0223 15:01:36.690640   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:01:36.690712   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:01:36.713111   33771 logs.go:277] 0 containers: []
	W0223 15:01:36.713131   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:01:36.713223   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:01:36.732339   33771 logs.go:277] 0 containers: []
	W0223 15:01:36.732353   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:01:36.732360   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:01:36.732370   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:01:38.780181   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047739812s)
	I0223 15:01:38.780302   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:01:38.780313   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:01:38.817249   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:01:38.817263   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:38.829611   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:01:38.829625   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:01:38.882924   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:01:38.882936   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:01:38.882943   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:01:41.404519   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:01:41.571934   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:01:41.593164   33771 logs.go:277] 0 containers: []
	W0223 15:01:41.593178   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:01:41.593252   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:01:41.612505   33771 logs.go:277] 0 containers: []
	W0223 15:01:41.612521   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:01:41.612612   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:01:41.631218   33771 logs.go:277] 0 containers: []
	W0223 15:01:41.631232   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:01:41.631302   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:01:41.651357   33771 logs.go:277] 0 containers: []
	W0223 15:01:41.651370   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:01:41.651442   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:01:41.670197   33771 logs.go:277] 0 containers: []
	W0223 15:01:41.670210   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:01:41.670299   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:01:41.689049   33771 logs.go:277] 0 containers: []
	W0223 15:01:41.689061   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:01:41.689130   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:01:41.707937   33771 logs.go:277] 0 containers: []
	W0223 15:01:41.707951   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:01:41.708020   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:01:41.727008   33771 logs.go:277] 0 containers: []
	W0223 15:01:41.727022   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:01:41.727030   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:01:41.727037   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:01:41.763959   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:01:41.763973   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:41.776071   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:01:41.776084   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:01:41.831275   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:01:41.831286   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:01:41.831293   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:01:41.852017   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:01:41.852033   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:01:43.898401   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046294291s)
	I0223 15:01:46.398692   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:01:46.571853   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:01:46.592687   33771 logs.go:277] 0 containers: []
	W0223 15:01:46.592702   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:01:46.592771   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:01:46.612077   33771 logs.go:277] 0 containers: []
	W0223 15:01:46.612089   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:01:46.612159   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:01:46.631338   33771 logs.go:277] 0 containers: []
	W0223 15:01:46.631352   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:01:46.631427   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:01:46.651728   33771 logs.go:277] 0 containers: []
	W0223 15:01:46.651742   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:01:46.651811   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:01:46.670177   33771 logs.go:277] 0 containers: []
	W0223 15:01:46.670189   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:01:46.670261   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:01:46.689168   33771 logs.go:277] 0 containers: []
	W0223 15:01:46.689181   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:01:46.689253   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:01:46.709267   33771 logs.go:277] 0 containers: []
	W0223 15:01:46.709281   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:01:46.709350   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:01:46.729529   33771 logs.go:277] 0 containers: []
	W0223 15:01:46.729546   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:01:46.729554   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:01:46.729563   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:46.742916   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:01:46.742933   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:01:46.798268   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:01:46.798280   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:01:46.798289   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:01:46.819009   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:01:46.819025   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:01:48.863585   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044489289s)
	I0223 15:01:48.863693   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:01:48.863700   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:01:51.401908   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:01:51.571399   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:01:51.591563   33771 logs.go:277] 0 containers: []
	W0223 15:01:51.591577   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:01:51.591650   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:01:51.610673   33771 logs.go:277] 0 containers: []
	W0223 15:01:51.610686   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:01:51.610757   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:01:51.630080   33771 logs.go:277] 0 containers: []
	W0223 15:01:51.630095   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:01:51.630164   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:01:51.650478   33771 logs.go:277] 0 containers: []
	W0223 15:01:51.650491   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:01:51.650558   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:01:51.669915   33771 logs.go:277] 0 containers: []
	W0223 15:01:51.669929   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:01:51.670000   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:01:51.689345   33771 logs.go:277] 0 containers: []
	W0223 15:01:51.689358   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:01:51.689426   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:01:51.709002   33771 logs.go:277] 0 containers: []
	W0223 15:01:51.709019   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:01:51.709091   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:01:51.728708   33771 logs.go:277] 0 containers: []
	W0223 15:01:51.728724   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:01:51.728732   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:01:51.728741   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:01:51.768484   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:01:51.768504   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:51.780712   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:01:51.780731   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:01:51.851500   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:01:51.851511   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:01:51.851519   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:01:51.872800   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:01:51.872815   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:01:53.917194   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04430831s)
	I0223 15:01:56.417563   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:01:56.573446   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:01:56.595095   33771 logs.go:277] 0 containers: []
	W0223 15:01:56.595110   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:01:56.595179   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:01:56.614330   33771 logs.go:277] 0 containers: []
	W0223 15:01:56.614343   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:01:56.614413   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:01:56.634352   33771 logs.go:277] 0 containers: []
	W0223 15:01:56.634367   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:01:56.634435   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:01:56.653722   33771 logs.go:277] 0 containers: []
	W0223 15:01:56.653736   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:01:56.653818   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:01:56.673607   33771 logs.go:277] 0 containers: []
	W0223 15:01:56.673621   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:01:56.673691   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:01:56.693268   33771 logs.go:277] 0 containers: []
	W0223 15:01:56.693280   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:01:56.693347   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:01:56.713393   33771 logs.go:277] 0 containers: []
	W0223 15:01:56.713406   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:01:56.713476   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:01:56.732675   33771 logs.go:277] 0 containers: []
	W0223 15:01:56.732689   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:01:56.732697   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:01:56.732704   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:01:56.769742   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:01:56.769759   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:01:56.781905   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:01:56.781919   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:01:56.836814   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:01:56.836826   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:01:56.836833   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:01:56.857789   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:01:56.857806   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:01:58.903307   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045429829s)
	I0223 15:02:01.405339   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:02:01.571687   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:02:01.591909   33771 logs.go:277] 0 containers: []
	W0223 15:02:01.591924   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:02:01.591995   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:02:01.611090   33771 logs.go:277] 0 containers: []
	W0223 15:02:01.611104   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:02:01.611172   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:02:01.630644   33771 logs.go:277] 0 containers: []
	W0223 15:02:01.630658   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:02:01.630728   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:02:01.649469   33771 logs.go:277] 0 containers: []
	W0223 15:02:01.649483   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:02:01.649554   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:02:01.668589   33771 logs.go:277] 0 containers: []
	W0223 15:02:01.668602   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:02:01.668671   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:02:01.688268   33771 logs.go:277] 0 containers: []
	W0223 15:02:01.688282   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:02:01.688354   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:02:01.707721   33771 logs.go:277] 0 containers: []
	W0223 15:02:01.707734   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:02:01.707805   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:02:01.727978   33771 logs.go:277] 0 containers: []
	W0223 15:02:01.727993   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:02:01.728001   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:02:01.728009   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:02:01.765849   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:02:01.765867   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:02:01.777829   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:02:01.777844   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:02:01.832109   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:02:01.832119   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:02:01.832126   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:02:01.852735   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:02:01.852749   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:02:03.895872   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043052116s)
	I0223 15:02:06.398311   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:02:06.572000   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:02:06.593003   33771 logs.go:277] 0 containers: []
	W0223 15:02:06.593018   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:02:06.593092   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:02:06.611951   33771 logs.go:277] 0 containers: []
	W0223 15:02:06.611964   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:02:06.612033   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:02:06.631553   33771 logs.go:277] 0 containers: []
	W0223 15:02:06.631566   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:02:06.631632   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:02:06.651625   33771 logs.go:277] 0 containers: []
	W0223 15:02:06.651638   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:02:06.651709   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:02:06.669920   33771 logs.go:277] 0 containers: []
	W0223 15:02:06.669934   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:02:06.670006   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:02:06.688408   33771 logs.go:277] 0 containers: []
	W0223 15:02:06.688425   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:02:06.688497   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:02:06.706980   33771 logs.go:277] 0 containers: []
	W0223 15:02:06.706994   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:02:06.707074   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:02:06.726832   33771 logs.go:277] 0 containers: []
	W0223 15:02:06.726846   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:02:06.726855   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:02:06.726862   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:02:08.774648   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047714881s)
	I0223 15:02:08.774758   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:02:08.774766   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:02:08.811527   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:02:08.811542   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:02:08.824841   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:02:08.824859   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:02:08.878583   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:02:08.878596   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:02:08.878603   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:02:11.400758   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:02:11.571961   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:02:11.593482   33771 logs.go:277] 0 containers: []
	W0223 15:02:11.593494   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:02:11.593569   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:02:11.612116   33771 logs.go:277] 0 containers: []
	W0223 15:02:11.612129   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:02:11.612198   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:02:11.631188   33771 logs.go:277] 0 containers: []
	W0223 15:02:11.631204   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:02:11.631298   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:02:11.650650   33771 logs.go:277] 0 containers: []
	W0223 15:02:11.650663   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:02:11.650734   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:02:11.670199   33771 logs.go:277] 0 containers: []
	W0223 15:02:11.670219   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:02:11.670321   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:02:11.691016   33771 logs.go:277] 0 containers: []
	W0223 15:02:11.691030   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:02:11.691100   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:02:11.710362   33771 logs.go:277] 0 containers: []
	W0223 15:02:11.710375   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:02:11.710445   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:02:11.730358   33771 logs.go:277] 0 containers: []
	W0223 15:02:11.730372   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:02:11.730379   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:02:11.730386   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:02:13.775135   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044679327s)
	I0223 15:02:13.775241   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:02:13.775250   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:02:13.814273   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:02:13.814289   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:02:13.826865   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:02:13.826882   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:02:13.881512   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:02:13.881523   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:02:13.881531   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:02:16.404167   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:02:16.572576   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:02:16.592437   33771 logs.go:277] 0 containers: []
	W0223 15:02:16.592450   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:02:16.592523   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:02:16.612438   33771 logs.go:277] 0 containers: []
	W0223 15:02:16.612453   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:02:16.612521   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:02:16.631838   33771 logs.go:277] 0 containers: []
	W0223 15:02:16.631853   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:02:16.631927   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:02:16.651225   33771 logs.go:277] 0 containers: []
	W0223 15:02:16.651239   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:02:16.651310   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:02:16.670859   33771 logs.go:277] 0 containers: []
	W0223 15:02:16.670873   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:02:16.670942   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:02:16.689953   33771 logs.go:277] 0 containers: []
	W0223 15:02:16.689967   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:02:16.690036   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:02:16.709292   33771 logs.go:277] 0 containers: []
	W0223 15:02:16.709305   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:02:16.709384   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:02:16.729609   33771 logs.go:277] 0 containers: []
	W0223 15:02:16.729623   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:02:16.729630   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:02:16.729638   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:02:16.767513   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:02:16.767532   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:02:16.779811   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:02:16.779827   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:02:16.833124   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:02:16.833135   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:02:16.833142   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:02:16.854027   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:02:16.854041   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:02:18.896986   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.042873773s)
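The probe above repeats on a fixed interval while minikube waits for the apiserver to reappear: look for a kube-apiserver process, list each expected control-plane container by name, then collect kubelet, dmesg, describe-nodes and Docker logs. A rough manual equivalent of one pass, built only from commands that appear verbatim in this log (run from a shell inside the minikube node, e.g. via minikube ssh; the loop and echo formatting are illustrative additions):

    # One hand-run pass of the same health probe (sketch; assumes a shell inside the node)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
      echo "${c}: ${ids:-<none>}"
    done
    sudo journalctl -u kubelet -n 400                     # kubelet logs
    sudo journalctl -u docker -n 400                      # Docker daemon logs
    sudo crictl ps -a 2>/dev/null || sudo docker ps -a    # container status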
	I0223 15:02:21.397362   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:02:21.572128   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:02:21.592369   33771 logs.go:277] 0 containers: []
	W0223 15:02:21.592383   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:02:21.592455   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:02:21.612278   33771 logs.go:277] 0 containers: []
	W0223 15:02:21.612293   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:02:21.612352   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:02:21.631971   33771 logs.go:277] 0 containers: []
	W0223 15:02:21.631985   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:02:21.632060   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:02:21.650737   33771 logs.go:277] 0 containers: []
	W0223 15:02:21.650752   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:02:21.650823   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:02:21.669824   33771 logs.go:277] 0 containers: []
	W0223 15:02:21.669837   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:02:21.669902   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:02:21.689484   33771 logs.go:277] 0 containers: []
	W0223 15:02:21.689502   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:02:21.689597   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:02:21.717094   33771 logs.go:277] 0 containers: []
	W0223 15:02:21.717109   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:02:21.717182   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:02:21.739298   33771 logs.go:277] 0 containers: []
	W0223 15:02:21.739314   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:02:21.739323   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:02:21.739339   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:02:23.784765   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045350324s)
	I0223 15:02:23.784884   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:02:23.784891   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:02:23.821871   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:02:23.821886   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:02:23.833958   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:02:23.833971   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:02:23.888042   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:02:23.888056   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:02:23.888063   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:02:26.409776   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:02:26.574448   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:02:26.595497   33771 logs.go:277] 0 containers: []
	W0223 15:02:26.595511   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:02:26.595581   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:02:26.614749   33771 logs.go:277] 0 containers: []
	W0223 15:02:26.614764   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:02:26.614834   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:02:26.633552   33771 logs.go:277] 0 containers: []
	W0223 15:02:26.633567   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:02:26.633643   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:02:26.652938   33771 logs.go:277] 0 containers: []
	W0223 15:02:26.652950   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:02:26.653016   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:02:26.672022   33771 logs.go:277] 0 containers: []
	W0223 15:02:26.672035   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:02:26.672105   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:02:26.692494   33771 logs.go:277] 0 containers: []
	W0223 15:02:26.692508   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:02:26.692577   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:02:26.711731   33771 logs.go:277] 0 containers: []
	W0223 15:02:26.711745   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:02:26.711816   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:02:26.730508   33771 logs.go:277] 0 containers: []
	W0223 15:02:26.730521   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:02:26.730528   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:02:26.730535   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:02:26.767949   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:02:26.767967   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:02:26.780381   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:02:26.780395   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:02:26.834867   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:02:26.834880   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:02:26.834888   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:02:26.856195   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:02:26.856210   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:02:28.900923   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04464149s)
	I0223 15:02:31.402271   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:02:31.573269   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:02:31.594208   33771 logs.go:277] 0 containers: []
	W0223 15:02:31.594221   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:02:31.594290   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:02:31.613883   33771 logs.go:277] 0 containers: []
	W0223 15:02:31.613897   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:02:31.613968   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:02:31.632688   33771 logs.go:277] 0 containers: []
	W0223 15:02:31.632701   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:02:31.632778   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:02:31.651490   33771 logs.go:277] 0 containers: []
	W0223 15:02:31.651504   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:02:31.651575   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:02:31.670295   33771 logs.go:277] 0 containers: []
	W0223 15:02:31.670309   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:02:31.670382   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:02:31.689545   33771 logs.go:277] 0 containers: []
	W0223 15:02:31.689560   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:02:31.689631   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:02:31.708903   33771 logs.go:277] 0 containers: []
	W0223 15:02:31.708916   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:02:31.708986   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:02:31.727206   33771 logs.go:277] 0 containers: []
	W0223 15:02:31.727220   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:02:31.727227   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:02:31.727236   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:02:33.772325   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045017459s)
	I0223 15:02:33.772449   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:02:33.772458   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:02:33.809888   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:02:33.809910   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:02:33.822393   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:02:33.822409   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:02:33.877574   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:02:33.877587   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:02:33.877594   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:02:36.398871   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:02:36.574683   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:02:36.596468   33771 logs.go:277] 0 containers: []
	W0223 15:02:36.596483   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:02:36.596554   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:02:36.616538   33771 logs.go:277] 0 containers: []
	W0223 15:02:36.616553   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:02:36.616623   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:02:36.636963   33771 logs.go:277] 0 containers: []
	W0223 15:02:36.636976   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:02:36.637049   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:02:36.656259   33771 logs.go:277] 0 containers: []
	W0223 15:02:36.656273   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:02:36.656343   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:02:36.675013   33771 logs.go:277] 0 containers: []
	W0223 15:02:36.675029   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:02:36.675104   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:02:36.693831   33771 logs.go:277] 0 containers: []
	W0223 15:02:36.693846   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:02:36.693916   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:02:36.714202   33771 logs.go:277] 0 containers: []
	W0223 15:02:36.714217   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:02:36.714292   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:02:36.734266   33771 logs.go:277] 0 containers: []
	W0223 15:02:36.734280   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:02:36.734289   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:02:36.734297   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:02:36.773306   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:02:36.773327   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:02:36.785970   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:02:36.786002   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:02:36.845138   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:02:36.845150   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:02:36.845159   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:02:36.865845   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:02:36.865860   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:02:38.910113   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044182253s)
	I0223 15:02:41.410862   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:02:41.573521   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:02:41.594707   33771 logs.go:277] 0 containers: []
	W0223 15:02:41.594719   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:02:41.594789   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:02:41.613916   33771 logs.go:277] 0 containers: []
	W0223 15:02:41.613930   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:02:41.613998   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:02:41.634104   33771 logs.go:277] 0 containers: []
	W0223 15:02:41.634117   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:02:41.634189   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:02:41.653179   33771 logs.go:277] 0 containers: []
	W0223 15:02:41.653193   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:02:41.653263   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:02:41.672389   33771 logs.go:277] 0 containers: []
	W0223 15:02:41.672402   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:02:41.672474   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:02:41.692433   33771 logs.go:277] 0 containers: []
	W0223 15:02:41.692447   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:02:41.692537   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:02:41.711302   33771 logs.go:277] 0 containers: []
	W0223 15:02:41.711316   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:02:41.711388   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:02:41.731152   33771 logs.go:277] 0 containers: []
	W0223 15:02:41.731167   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:02:41.731175   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:02:41.731182   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:02:41.768542   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:02:41.768556   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:02:41.780233   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:02:41.780246   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:02:41.834832   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 15:02:41.834843   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:02:41.834850   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:02:41.855682   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:02:41.855696   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:02:43.899860   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044093343s)
	I0223 15:02:46.401651   33771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:02:46.573679   33771 kubeadm.go:637] restartCluster took 4m11.321573824s
	W0223 15:02:46.573804   33771 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
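Every probe over the preceding four minutes found no kube-apiserver process, and every kubectl call to localhost:8443 was refused, so minikube gives up on restarting the existing cluster and falls back to a full reset followed by a fresh kubeadm init (the commands follow below). A minimal manual check of the same condition might look like this; only the pgrep line appears verbatim in the log, and the https scheme for the healthz probe is an assumption:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process found" || echo "no apiserver process"
    curl -ksS https://localhost:8443/healthz || echo "apiserver not answering on localhost:8443"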
	I0223 15:02:46.573834   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 15:02:46.986097   33771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 15:02:46.996098   33771 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 15:02:47.004019   33771 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 15:02:47.004071   33771 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 15:02:47.011618   33771 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
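The exit status 2 from the ls check above is expected here: kubeadm reset has just removed the kubeconfig files, so there is no stale configuration to clean up and minikube proceeds straight to kubeadm init. A sketch of the same check with its interpretation (paths as in the log; the status handling is illustrative):

    if sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
                   /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; then
      echo "existing kubeconfigs found - stale config cleanup would run"
    else
      echo "exit status $? - nothing to clean up, go straight to kubeadm init"
    fi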
	I0223 15:02:47.011647   33771 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 15:02:47.061055   33771 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 15:02:47.061113   33771 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 15:02:47.226977   33771 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 15:02:47.227066   33771 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 15:02:47.227142   33771 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 15:02:47.384507   33771 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 15:02:47.385298   33771 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 15:02:47.391855   33771 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 15:02:47.466498   33771 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 15:02:47.488150   33771 out.go:204]   - Generating certificates and keys ...
	I0223 15:02:47.488235   33771 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 15:02:47.488328   33771 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 15:02:47.488413   33771 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 15:02:47.488486   33771 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 15:02:47.488556   33771 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 15:02:47.488604   33771 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 15:02:47.488664   33771 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 15:02:47.488714   33771 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 15:02:47.488794   33771 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 15:02:47.488889   33771 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 15:02:47.488937   33771 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 15:02:47.488987   33771 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 15:02:47.616728   33771 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 15:02:47.701356   33771 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 15:02:47.873115   33771 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 15:02:47.962620   33771 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 15:02:47.963133   33771 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 15:02:47.984779   33771 out.go:204]   - Booting up control plane ...
	I0223 15:02:47.984879   33771 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 15:02:47.984973   33771 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 15:02:47.985031   33771 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 15:02:47.985115   33771 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 15:02:47.985251   33771 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 15:03:27.973309   33771 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 15:03:27.974184   33771 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 15:03:27.974397   33771 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 15:03:32.974519   33771 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 15:03:32.974702   33771 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 15:03:42.976324   33771 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 15:03:42.976508   33771 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 15:04:02.977767   33771 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 15:04:02.977940   33771 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 15:04:42.980013   33771 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 15:04:42.980222   33771 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 15:04:42.980237   33771 kubeadm.go:322] 
	I0223 15:04:42.980296   33771 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 15:04:42.980337   33771 kubeadm.go:322] 	timed out waiting for the condition
	I0223 15:04:42.980346   33771 kubeadm.go:322] 
	I0223 15:04:42.980397   33771 kubeadm.go:322] This error is likely caused by:
	I0223 15:04:42.980452   33771 kubeadm.go:322] 	- The kubelet is not running
	I0223 15:04:42.980588   33771 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 15:04:42.980605   33771 kubeadm.go:322] 
	I0223 15:04:42.980793   33771 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 15:04:42.980828   33771 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 15:04:42.980859   33771 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 15:04:42.980866   33771 kubeadm.go:322] 
	I0223 15:04:42.980981   33771 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 15:04:42.981104   33771 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 15:04:42.981182   33771 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 15:04:42.981260   33771 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 15:04:42.981388   33771 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 15:04:42.981424   33771 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 15:04:42.983966   33771 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 15:04:42.984056   33771 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 15:04:42.984172   33771 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 15:04:42.984267   33771 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 15:04:42.984336   33771 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 15:04:42.984392   33771 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
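The kubeadm output above already names the useful next steps; collected into one runnable sequence (every command is quoted from the message itself, while --no-pager, the tail filter and the container-id placeholder are small additions for non-interactive use):

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet | tail -n 100
    docker ps -a | grep kube | grep -v pause    # locate a failed control-plane container, if one exists
    # docker logs CONTAINERID                   # then read its logs (substitute a real container ID)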
	W0223 15:04:42.984525   33771 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
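Of the four preflight warnings, the cgroup-driver one is the only configuration mismatch kubeadm points at directly: Docker is running with the cgroupfs driver while the recommended driver is systemd. One commonly documented way to switch Docker to the systemd driver is sketched below; whether this is appropriate or persistent inside the minikube kic base image is an assumption, so treat it purely as an illustration of the setting the warning refers to, and note that kubelet's cgroup driver would have to match as well:

    # Sketch only: point Docker at the systemd cgroup driver (overwrites any existing daemon.json)
    echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json >/dev/null
    sudo systemctl restart docker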
	
	I0223 15:04:42.984558   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 15:04:43.393532   33771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 15:04:43.404100   33771 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 15:04:43.404169   33771 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 15:04:43.412289   33771 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 15:04:43.412309   33771 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 15:04:43.461400   33771 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0223 15:04:43.461452   33771 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 15:04:43.627656   33771 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 15:04:43.627862   33771 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 15:04:43.628031   33771 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 15:04:43.785959   33771 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 15:04:43.788846   33771 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 15:04:43.796862   33771 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0223 15:04:43.865795   33771 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 15:04:43.889431   33771 out.go:204]   - Generating certificates and keys ...
	I0223 15:04:43.889510   33771 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 15:04:43.889571   33771 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 15:04:43.889688   33771 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 15:04:43.889756   33771 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 15:04:43.889825   33771 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 15:04:43.889891   33771 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 15:04:43.889963   33771 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 15:04:43.890022   33771 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 15:04:43.890090   33771 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 15:04:43.890161   33771 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 15:04:43.890194   33771 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 15:04:43.890246   33771 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 15:04:43.935813   33771 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 15:04:44.021063   33771 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 15:04:44.169211   33771 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 15:04:44.347688   33771 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 15:04:44.348389   33771 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 15:04:44.370360   33771 out.go:204]   - Booting up control plane ...
	I0223 15:04:44.370453   33771 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 15:04:44.370516   33771 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 15:04:44.370585   33771 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 15:04:44.370706   33771 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 15:04:44.370853   33771 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 15:05:24.358321   33771 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 15:05:24.358782   33771 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 15:05:24.358956   33771 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 15:05:29.359960   33771 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 15:05:29.360107   33771 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 15:05:39.362445   33771 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 15:05:39.362672   33771 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 15:05:59.363493   33771 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 15:05:59.363675   33771 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 15:06:39.367106   33771 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 15:06:39.367332   33771 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 15:06:39.367346   33771 kubeadm.go:322] 
	I0223 15:06:39.367418   33771 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 15:06:39.367472   33771 kubeadm.go:322] 	timed out waiting for the condition
	I0223 15:06:39.367487   33771 kubeadm.go:322] 
	I0223 15:06:39.367527   33771 kubeadm.go:322] This error is likely caused by:
	I0223 15:06:39.367584   33771 kubeadm.go:322] 	- The kubelet is not running
	I0223 15:06:39.367694   33771 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 15:06:39.367703   33771 kubeadm.go:322] 
	I0223 15:06:39.367840   33771 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 15:06:39.367881   33771 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 15:06:39.367919   33771 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 15:06:39.367931   33771 kubeadm.go:322] 
	I0223 15:06:39.368053   33771 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 15:06:39.368126   33771 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 15:06:39.368200   33771 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 15:06:39.368239   33771 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 15:06:39.368301   33771 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 15:06:39.368333   33771 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 15:06:39.370619   33771 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 15:06:39.370685   33771 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 15:06:39.370785   33771 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 15:06:39.370874   33771 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 15:06:39.370958   33771 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 15:06:39.371017   33771 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 15:06:39.371035   33771 kubeadm.go:403] StartCluster complete in 8m4.140598534s
	I0223 15:06:39.371131   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:06:39.390321   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.390333   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:06:39.390403   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:06:39.410851   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.410865   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:06:39.410947   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:06:39.430090   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.430103   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:06:39.430177   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:06:39.449615   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.449630   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:06:39.449698   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:06:39.469693   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.469708   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:06:39.469779   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:06:39.488934   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.488950   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:06:39.489033   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:06:39.508436   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.508449   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:06:39.508518   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:06:39.534494   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.534507   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:06:39.534515   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:06:39.534523   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:06:39.556111   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:06:39.556132   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:06:41.602211   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046007391s)
	I0223 15:06:41.602325   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:06:41.602332   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:06:41.639676   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:06:41.639691   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:06:41.652035   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:06:41.652050   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:06:41.706272   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0223 15:06:41.706291   33771 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 15:06:41.706310   33771 out.go:239] * 
	* 
	W0223 15:06:41.706442   33771 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 15:06:41.706458   33771 out.go:239] * 
	* 
	W0223 15:06:41.707066   33771 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 15:06:41.771711   33771 out.go:177] 
	W0223 15:06:41.813793   33771 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 15:06:41.813856   33771 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 15:06:41.813887   33771 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 15:06:41.834757   33771 out.go:177] 

                                                
                                                
** /stderr **
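The failed run above ends with minikube's own suggestion to retry with `--extra-config=kubelet.cgroup-driver=systemd`, since the preflight warnings report Docker using the `cgroupfs` driver. Below is a minimal sketch of what such a retry could look like for this profile; the flag value, profile name, and start flags are taken from the log and test arguments above, and the commands themselves are an assumption, not part of the recorded run.

	# Hypothetical retry, not part of the recorded test output:
	# delete the stuck profile, then start it again with the kubelet
	# cgroup driver pinned to systemd, as suggested in the log above.
	out/minikube-darwin-amd64 delete -p old-k8s-version-919000
	out/minikube-darwin-amd64 start -p old-k8s-version-919000 \
	  --memory=2200 --driver=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd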
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-919000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-919000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-919000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f",
	        "Created": "2023-02-23T22:52:47.108009889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295370,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:58:31.462047896Z",
	            "FinishedAt": "2023-02-23T22:58:28.586447345Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hosts",
	        "LogPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f-json.log",
	        "Name": "/old-k8s-version-919000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-919000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-919000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-919000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-919000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-919000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7f0571f7cf360e3b17992c95713f7ea16dfa34d74d6177b2bc9da7d70e05cc8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62350"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62351"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62352"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62353"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62354"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e7f0571f7cf3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-919000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5b30451d4570",
	                        "old-k8s-version-919000"
	                    ],
	                    "NetworkID": "c7154bbdfe1ae896999b2fd2c462dec29ff61281e64aa32aac9e788f781af78c",
	                    "EndpointID": "39c19201631fccca791376922476d56d0e5ed13cd34e99506a6626afdd2a5781",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
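The post-mortem above dumps the full `docker inspect` JSON for the node container. When only a couple of fields matter here (the container state and the host port that the refused `localhost:8443` API-server connection maps to), `docker inspect --format` with a Go template can pull them directly. The commands below are an illustrative sketch against the container name shown above, not part of the recorded test output.

	# Hypothetical narrowing of the inspect dump above; not part of the test run.
	docker inspect -f '{{.State.Status}} (restarts: {{.RestartCount}})' old-k8s-version-919000
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-919000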
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 2 (401.237573ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-919000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-919000 logs -n 25: (3.394558345s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-452000 sudo                            | kubenet-452000               | jenkins | v1.29.0 | 23 Feb 23 14:53 PST |                     |
	|         | systemctl status crio --all                       |                              |         |         |                     |                     |
	|         | --full --no-pager                                 |                              |         |         |                     |                     |
	| ssh     | -p kubenet-452000 sudo                            | kubenet-452000               | jenkins | v1.29.0 | 23 Feb 23 14:53 PST | 23 Feb 23 14:53 PST |
	|         | systemctl cat crio --no-pager                     |                              |         |         |                     |                     |
	| ssh     | -p kubenet-452000 sudo find                       | kubenet-452000               | jenkins | v1.29.0 | 23 Feb 23 14:53 PST | 23 Feb 23 14:53 PST |
	|         | /etc/crio -type f -exec sh -c                     |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                              |                              |         |         |                     |                     |
	| ssh     | -p kubenet-452000 sudo crio                       | kubenet-452000               | jenkins | v1.29.0 | 23 Feb 23 14:53 PST | 23 Feb 23 14:53 PST |
	|         | config                                            |                              |         |         |                     |                     |
	| delete  | -p kubenet-452000                                 | kubenet-452000               | jenkins | v1.29.0 | 23 Feb 23 14:53 PST | 23 Feb 23 14:53 PST |
	| delete  | -p                                                | disable-driver-mounts-500000 | jenkins | v1.29.0 | 23 Feb 23 14:53 PST | 23 Feb 23 14:53 PST |
	|         | disable-driver-mounts-500000                      |                              |         |         |                     |                     |
	| start   | -p no-preload-436000                              | no-preload-436000            | jenkins | v1.29.0 | 23 Feb 23 14:53 PST | 23 Feb 23 14:54 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-436000        | no-preload-436000            | jenkins | v1.29.0 | 23 Feb 23 14:54 PST | 23 Feb 23 14:54 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p no-preload-436000                              | no-preload-436000            | jenkins | v1.29.0 | 23 Feb 23 14:54 PST | 23 Feb 23 14:54 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-436000             | no-preload-436000            | jenkins | v1.29.0 | 23 Feb 23 14:54 PST | 23 Feb 23 14:54 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-436000                              | no-preload-436000            | jenkins | v1.29.0 | 23 Feb 23 14:54 PST | 23 Feb 23 15:04 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-919000   | old-k8s-version-919000       | jenkins | v1.29.0 | 23 Feb 23 14:56 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-919000                         | old-k8s-version-919000       | jenkins | v1.29.0 | 23 Feb 23 14:58 PST | 23 Feb 23 14:58 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-919000        | old-k8s-version-919000       | jenkins | v1.29.0 | 23 Feb 23 14:58 PST | 23 Feb 23 14:58 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-919000                         | old-k8s-version-919000       | jenkins | v1.29.0 | 23 Feb 23 14:58 PST |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-436000 sudo                         | no-preload-436000            | jenkins | v1.29.0 | 23 Feb 23 15:04 PST | 23 Feb 23 15:04 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-436000                              | no-preload-436000            | jenkins | v1.29.0 | 23 Feb 23 15:04 PST | 23 Feb 23 15:04 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-436000                              | no-preload-436000            | jenkins | v1.29.0 | 23 Feb 23 15:04 PST | 23 Feb 23 15:04 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-436000                              | no-preload-436000            | jenkins | v1.29.0 | 23 Feb 23 15:04 PST | 23 Feb 23 15:04 PST |
	| delete  | -p no-preload-436000                              | no-preload-436000            | jenkins | v1.29.0 | 23 Feb 23 15:04 PST | 23 Feb 23 15:04 PST |
	| start   | -p                                                | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:04 PST | 23 Feb 23 15:05 PST |
	|         | default-k8s-diff-port-938000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:05 PST | 23 Feb 23 15:05 PST |
	|         | default-k8s-diff-port-938000                      |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:05 PST | 23 Feb 23 15:05 PST |
	|         | default-k8s-diff-port-938000                      |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-938000  | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:05 PST | 23 Feb 23 15:05 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:05 PST |                     |
	|         | default-k8s-diff-port-938000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
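
A minimal sketch of the default-k8s-diff-port-938000 lifecycle recorded in the audit table above, replayed by hand with the same flags. The flag values are copied verbatim from the table; running the checked-in binary at out/minikube-darwin-amd64 outside the test harness is an assumption:

    MK=out/minikube-darwin-amd64
    P=default-k8s-diff-port-938000
    # initial start on the non-default API server port
    "$MK" start -p "$P" --memory=2200 --alsologtostderr --wait=true \
        --apiserver-port=8444 --driver=docker --kubernetes-version=v1.26.1
    # addon overrides exercised by the test (fake registry/image on purpose)
    "$MK" addons enable metrics-server -p "$P" \
        --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    "$MK" stop -p "$P" --alsologtostderr -v=3
    "$MK" addons enable dashboard -p "$P" \
        --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
    # second start against the existing (stopped) profile
    "$MK" start -p "$P" --memory=2200 --alsologtostderr --wait=true \
        --apiserver-port=8444 --driver=docker --kubernetes-version=v1.26.1
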
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 15:05:43
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 15:05:43.028184   34607 out.go:296] Setting OutFile to fd 1 ...
	I0223 15:05:43.028350   34607 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 15:05:43.028355   34607 out.go:309] Setting ErrFile to fd 2...
	I0223 15:05:43.028359   34607 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 15:05:43.028462   34607 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 15:05:43.029737   34607 out.go:303] Setting JSON to false
	I0223 15:05:43.048009   34607 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9317,"bootTime":1677184226,"procs":409,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 15:05:43.048098   34607 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 15:05:43.070077   34607 out.go:177] * [default-k8s-diff-port-938000] minikube v1.29.0 on Darwin 13.2
	I0223 15:05:43.113119   34607 notify.go:220] Checking for updates...
	I0223 15:05:43.135058   34607 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 15:05:43.156989   34607 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 15:05:43.179110   34607 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 15:05:43.201007   34607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 15:05:43.222657   34607 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	I0223 15:05:43.243796   34607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 15:05:43.265126   34607 config.go:182] Loaded profile config "default-k8s-diff-port-938000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 15:05:43.265502   34607 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 15:05:43.325914   34607 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 15:05:43.326086   34607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 15:05:43.466740   34607 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 23:05:43.375447319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 15:05:43.510277   34607 out.go:177] * Using the docker driver based on existing profile
	I0223 15:05:43.531527   34607 start.go:296] selected driver: docker
	I0223 15:05:43.531551   34607 start.go:857] validating driver "docker" against &{Name:default-k8s-diff-port-938000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-938000 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 15:05:43.531701   34607 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 15:05:43.535522   34607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 15:05:43.677498   34607 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 23:05:43.585595099 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 15:05:43.677658   34607 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 15:05:43.677679   34607 cni.go:84] Creating CNI manager for ""
	I0223 15:05:43.677691   34607 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 15:05:43.677700   34607 start_flags.go:319] config:
	{Name:default-k8s-diff-port-938000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-938000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 15:05:43.721417   34607 out.go:177] * Starting control plane node default-k8s-diff-port-938000 in cluster default-k8s-diff-port-938000
	I0223 15:05:43.742264   34607 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 15:05:43.763219   34607 out.go:177] * Pulling base image ...
	I0223 15:05:43.805469   34607 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 15:05:43.805528   34607 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 15:05:43.805582   34607 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 15:05:43.805605   34607 cache.go:57] Caching tarball of preloaded images
	I0223 15:05:43.805844   34607 preload.go:174] Found /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 15:05:43.805863   34607 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 15:05:43.806941   34607 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/config.json ...
	I0223 15:05:43.862084   34607 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 15:05:43.862115   34607 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 15:05:43.862136   34607 cache.go:193] Successfully downloaded all kic artifacts
	I0223 15:05:43.862191   34607 start.go:364] acquiring machines lock for default-k8s-diff-port-938000: {Name:mkcd78ef17512a8a7c3d48b54df3701531059948 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 15:05:43.862289   34607 start.go:368] acquired machines lock for "default-k8s-diff-port-938000" in 77.09µs
	I0223 15:05:43.862316   34607 start.go:96] Skipping create...Using existing machine configuration
	I0223 15:05:43.862325   34607 fix.go:55] fixHost starting: 
	I0223 15:05:43.862582   34607 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-938000 --format={{.State.Status}}
	I0223 15:05:43.919481   34607 fix.go:103] recreateIfNeeded on default-k8s-diff-port-938000: state=Stopped err=<nil>
	W0223 15:05:43.919510   34607 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 15:05:43.963073   34607 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-938000" ...
	I0223 15:05:44.004936   34607 cli_runner.go:164] Run: docker start default-k8s-diff-port-938000
	I0223 15:05:44.354831   34607 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-938000 --format={{.State.Status}}
	I0223 15:05:44.414562   34607 kic.go:426] container "default-k8s-diff-port-938000" state is running.
	I0223 15:05:44.415131   34607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-938000
	I0223 15:05:44.475828   34607 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/config.json ...
	I0223 15:05:44.476353   34607 machine.go:88] provisioning docker machine ...
	I0223 15:05:44.476399   34607 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-938000"
	I0223 15:05:44.476498   34607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-938000
	I0223 15:05:44.537538   34607 main.go:141] libmachine: Using SSH client type: native
	I0223 15:05:44.537955   34607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62501 <nil> <nil>}
	I0223 15:05:44.537970   34607 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-938000 && echo "default-k8s-diff-port-938000" | sudo tee /etc/hostname
	I0223 15:05:44.687775   34607 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-938000
	
	I0223 15:05:44.687872   34607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-938000
	I0223 15:05:44.747788   34607 main.go:141] libmachine: Using SSH client type: native
	I0223 15:05:44.748153   34607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62501 <nil> <nil>}
	I0223 15:05:44.748174   34607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-938000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-938000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-938000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 15:05:44.881478   34607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 15:05:44.881507   34607 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
	I0223 15:05:44.881524   34607 ubuntu.go:177] setting up certificates
	I0223 15:05:44.881531   34607 provision.go:83] configureAuth start
	I0223 15:05:44.881605   34607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-938000
	I0223 15:05:44.939461   34607 provision.go:138] copyHostCerts
	I0223 15:05:44.939573   34607 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
	I0223 15:05:44.939583   34607 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 15:05:44.939686   34607 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
	I0223 15:05:44.939895   34607 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
	I0223 15:05:44.939901   34607 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 15:05:44.939965   34607 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
	I0223 15:05:44.940112   34607 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
	I0223 15:05:44.940119   34607 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 15:05:44.940180   34607 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
	I0223 15:05:44.940302   34607 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-938000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-938000]
	I0223 15:05:45.001939   34607 provision.go:172] copyRemoteCerts
	I0223 15:05:45.001991   34607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 15:05:45.002039   34607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-938000
	I0223 15:05:45.059145   34607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62501 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/default-k8s-diff-port-938000/id_rsa Username:docker}
	I0223 15:05:45.154094   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 15:05:45.171236   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0223 15:05:45.188415   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 15:05:45.205711   34607 provision.go:86] duration metric: configureAuth took 324.155502ms
	I0223 15:05:45.205729   34607 ubuntu.go:193] setting minikube options for container-runtime
	I0223 15:05:45.205887   34607 config.go:182] Loaded profile config "default-k8s-diff-port-938000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 15:05:45.205957   34607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-938000
	I0223 15:05:45.264884   34607 main.go:141] libmachine: Using SSH client type: native
	I0223 15:05:45.265245   34607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62501 <nil> <nil>}
	I0223 15:05:45.265256   34607 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 15:05:45.399947   34607 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 15:05:45.399967   34607 ubuntu.go:71] root file system type: overlay
	I0223 15:05:45.400073   34607 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 15:05:45.400160   34607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-938000
	I0223 15:05:45.457816   34607 main.go:141] libmachine: Using SSH client type: native
	I0223 15:05:45.458171   34607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62501 <nil> <nil>}
	I0223 15:05:45.458221   34607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 15:05:45.600131   34607 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 15:05:45.600221   34607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-938000
	I0223 15:05:45.657299   34607 main.go:141] libmachine: Using SSH client type: native
	I0223 15:05:45.657681   34607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62501 <nil> <nil>}
	I0223 15:05:45.657695   34607 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 15:05:45.794458   34607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
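
The unit refresh logged just above is a single guarded one-liner. Expanded for readability (same paths and commands as the logged command, nothing added), it only swaps in docker.service.new and bounces the daemon when the rendered unit actually differs from the installed one:

    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi
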
	I0223 15:05:45.794473   34607 machine.go:91] provisioned docker machine in 1.318070419s
	I0223 15:05:45.794484   34607 start.go:300] post-start starting for "default-k8s-diff-port-938000" (driver="docker")
	I0223 15:05:45.794490   34607 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 15:05:45.794563   34607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 15:05:45.794615   34607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-938000
	I0223 15:05:45.851671   34607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62501 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/default-k8s-diff-port-938000/id_rsa Username:docker}
	I0223 15:05:45.948224   34607 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 15:05:45.951769   34607 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 15:05:45.951789   34607 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 15:05:45.951797   34607 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 15:05:45.951802   34607 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 15:05:45.951810   34607 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
	I0223 15:05:45.951914   34607 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
	I0223 15:05:45.952086   34607 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
	I0223 15:05:45.952275   34607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 15:05:45.959549   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
	I0223 15:05:45.976576   34607 start.go:303] post-start completed in 182.078242ms
	I0223 15:05:45.976648   34607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 15:05:45.976705   34607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-938000
	I0223 15:05:46.034008   34607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62501 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/default-k8s-diff-port-938000/id_rsa Username:docker}
	I0223 15:05:46.126690   34607 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 15:05:46.131667   34607 fix.go:57] fixHost completed within 2.269271406s
	I0223 15:05:46.131687   34607 start.go:83] releasing machines lock for "default-k8s-diff-port-938000", held for 2.269324889s
	I0223 15:05:46.131777   34607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-938000
	I0223 15:05:46.189172   34607 ssh_runner.go:195] Run: cat /version.json
	I0223 15:05:46.189209   34607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 15:05:46.189253   34607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-938000
	I0223 15:05:46.189295   34607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-938000
	I0223 15:05:46.250566   34607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62501 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/default-k8s-diff-port-938000/id_rsa Username:docker}
	I0223 15:05:46.250876   34607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62501 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/default-k8s-diff-port-938000/id_rsa Username:docker}
	I0223 15:05:46.395220   34607 ssh_runner.go:195] Run: systemctl --version
	I0223 15:05:46.400429   34607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 15:05:46.405571   34607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 15:05:46.420679   34607 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 15:05:46.420753   34607 ssh_runner.go:195] Run: which cri-dockerd
	I0223 15:05:46.424754   34607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 15:05:46.432032   34607 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 15:05:46.444687   34607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 15:05:46.452283   34607 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0223 15:05:46.452297   34607 start.go:485] detecting cgroup driver to use...
	I0223 15:05:46.452309   34607 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 15:05:46.452391   34607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 15:05:46.465212   34607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 15:05:46.473772   34607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 15:05:46.482053   34607 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 15:05:46.482113   34607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 15:05:46.490434   34607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 15:05:46.498963   34607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 15:05:46.507417   34607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 15:05:46.515555   34607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 15:05:46.523280   34607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 15:05:46.531913   34607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 15:05:46.539062   34607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 15:05:46.546143   34607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 15:05:46.615095   34607 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 15:05:46.688796   34607 start.go:485] detecting cgroup driver to use...
	I0223 15:05:46.688816   34607 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 15:05:46.688881   34607 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 15:05:46.703011   34607 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 15:05:46.703076   34607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 15:05:46.713304   34607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 15:05:46.726987   34607 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 15:05:46.830262   34607 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 15:05:46.930869   34607 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 15:05:46.930886   34607 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 15:05:46.944491   34607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 15:05:47.021071   34607 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 15:05:47.278524   34607 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 15:05:47.343564   34607 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 15:05:47.412498   34607 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 15:05:47.475551   34607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 15:05:47.539436   34607 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 15:05:47.559098   34607 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 15:05:47.559225   34607 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 15:05:47.563258   34607 start.go:553] Will wait 60s for crictl version
	I0223 15:05:47.563302   34607 ssh_runner.go:195] Run: which crictl
	I0223 15:05:47.567277   34607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 15:05:47.667418   34607 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 15:05:47.667505   34607 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 15:05:47.692833   34607 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 15:05:47.766620   34607 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 15:05:47.766748   34607 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-938000 dig +short host.docker.internal
	I0223 15:05:47.877739   34607 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 15:05:47.877867   34607 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 15:05:47.882266   34607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
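
The hosts-file update above avoids an in-place sed: it filters out any existing host.minikube.internal line, appends the fresh mapping, and copies the temp file back over /etc/hosts. Spelled out with the same name and IP as the logged command:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.65.2\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
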
	I0223 15:05:47.892234   34607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-938000
	I0223 15:05:47.949758   34607 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 15:05:47.949829   34607 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 15:05:47.969319   34607 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0223 15:05:47.969335   34607 docker.go:560] Images already preloaded, skipping extraction
	I0223 15:05:47.969408   34607 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 15:05:47.989234   34607 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0223 15:05:47.989253   34607 cache_images.go:84] Images are preloaded, skipping loading
	I0223 15:05:47.989338   34607 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 15:05:48.014373   34607 cni.go:84] Creating CNI manager for ""
	I0223 15:05:48.014390   34607 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 15:05:48.014408   34607 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 15:05:48.014424   34607 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-938000 NodeName:default-k8s-diff-port-938000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 15:05:48.014538   34607 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-938000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 15:05:48.014652   34607 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-938000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-938000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0223 15:05:48.014719   34607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 15:05:48.022667   34607 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 15:05:48.022731   34607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 15:05:48.030045   34607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (460 bytes)
	I0223 15:05:48.066881   34607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 15:05:48.079793   34607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0223 15:05:48.092621   34607 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0223 15:05:48.096497   34607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 15:05:48.106202   34607 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000 for IP: 192.168.76.2
	I0223 15:05:48.106219   34607 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 15:05:48.106398   34607 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
	I0223 15:05:48.106457   34607 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
	I0223 15:05:48.106555   34607 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.key
	I0223 15:05:48.106627   34607 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/apiserver.key.31bdca25
	I0223 15:05:48.106687   34607 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/proxy-client.key
	I0223 15:05:48.106919   34607 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
	W0223 15:05:48.106959   34607 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
	I0223 15:05:48.106971   34607 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 15:05:48.107007   34607 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
	I0223 15:05:48.107065   34607 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
	I0223 15:05:48.107095   34607 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
	I0223 15:05:48.107163   34607 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
	I0223 15:05:48.107796   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 15:05:48.125278   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 15:05:48.142681   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 15:05:48.159516   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 15:05:48.176280   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 15:05:48.193421   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 15:05:48.210674   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 15:05:48.227673   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 15:05:48.244884   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
	I0223 15:05:48.261666   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
	I0223 15:05:48.279252   34607 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 15:05:48.296310   34607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 15:05:48.308926   34607 ssh_runner.go:195] Run: openssl version
	I0223 15:05:48.314365   34607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
	I0223 15:05:48.322373   34607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
	I0223 15:05:48.326387   34607 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 15:05:48.326435   34607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
	I0223 15:05:48.331773   34607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
	I0223 15:05:48.339654   34607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
	I0223 15:05:48.348035   34607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
	I0223 15:05:48.352067   34607 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 15:05:48.352121   34607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
	I0223 15:05:48.357438   34607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 15:05:48.365007   34607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 15:05:48.373127   34607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 15:05:48.377055   34607 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 15:05:48.377104   34607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 15:05:48.382459   34607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 15:05:48.389871   34607 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-938000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-938000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 15:05:48.389990   34607 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 15:05:48.408725   34607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 15:05:48.417072   34607 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 15:05:48.417099   34607 kubeadm.go:633] restartCluster start
	I0223 15:05:48.417157   34607 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 15:05:48.424056   34607 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:48.424122   34607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-938000
	I0223 15:05:48.482379   34607 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-938000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 15:05:48.482547   34607 kubeconfig.go:146] "default-k8s-diff-port-938000" context is missing from /Users/jenkins/minikube-integration/15909-14738/kubeconfig - will repair!
	I0223 15:05:48.482876   34607 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/kubeconfig: {Name:mk366c13f6069774a57c4d74123d5172c8522a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 15:05:48.484501   34607 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 15:05:48.492453   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:48.492529   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:48.501615   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:49.002101   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:49.002308   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:49.013338   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:49.503913   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:49.504043   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:49.514956   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:50.001729   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:50.001811   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:50.011571   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:50.503826   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:50.504065   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:50.515132   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:51.003190   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:51.003339   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:51.014690   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:51.502063   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:51.502141   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:51.511811   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:52.002526   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:52.002647   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:52.013908   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:52.501858   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:52.501999   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:52.512389   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:53.001952   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:53.002020   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:53.011609   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:53.502130   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:53.502360   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:53.513378   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:54.002261   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:54.002425   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:54.013262   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:54.502542   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:54.502626   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:54.512437   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:55.003926   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:55.004162   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:55.015018   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:55.503269   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:55.503501   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:55.514744   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:56.001967   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:56.002073   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:56.011819   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:56.503999   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:56.504260   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:56.515191   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:57.002305   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:57.002502   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:57.013101   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:57.501997   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:57.502123   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:57.511593   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:58.002177   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:58.002369   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:58.013354   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:59.363493   33771 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 15:05:59.363675   33771 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 15:05:58.503093   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:58.503242   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:58.514483   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:58.514494   34607 api_server.go:165] Checking apiserver status ...
	I0223 15:05:58.514545   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:05:58.523048   34607 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:58.523059   34607 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0223 15:05:58.523067   34607 kubeadm.go:1120] stopping kube-system containers ...
	I0223 15:05:58.523134   34607 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 15:05:58.544029   34607 docker.go:456] Stopping containers: [f842a1744719 fc91ea9e3a6f aabe5dca5343 9d9a2662634f 7e4e94f62794 05b5dc14cfc7 6b2ed5a1b365 4856f0e2c068 c56e8d4ce5fe 67ab8bfcb83d 6e366c1e7450 b1d9ad2a3026 7e3484daf807 362a40ac054b 4fe5019051cf ce1e720d0679]
	I0223 15:05:58.544116   34607 ssh_runner.go:195] Run: docker stop f842a1744719 fc91ea9e3a6f aabe5dca5343 9d9a2662634f 7e4e94f62794 05b5dc14cfc7 6b2ed5a1b365 4856f0e2c068 c56e8d4ce5fe 67ab8bfcb83d 6e366c1e7450 b1d9ad2a3026 7e3484daf807 362a40ac054b 4fe5019051cf ce1e720d0679
	I0223 15:05:58.563650   34607 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 15:05:58.574281   34607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 15:05:58.581968   34607 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 23 23:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 23 23:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Feb 23 23:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 23 23:04 /etc/kubernetes/scheduler.conf
	
	I0223 15:05:58.582033   34607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0223 15:05:58.589397   34607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0223 15:05:58.596951   34607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0223 15:05:58.604212   34607 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:58.604263   34607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0223 15:05:58.611259   34607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0223 15:05:58.618563   34607 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:05:58.618611   34607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0223 15:05:58.625701   34607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 15:05:58.633464   34607 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 15:05:58.633474   34607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:05:58.685432   34607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:05:59.022778   34607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:05:59.158196   34607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:05:59.219999   34607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:05:59.325564   34607 api_server.go:51] waiting for apiserver process to appear ...
	I0223 15:05:59.325634   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:05:59.835136   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:06:00.335889   34607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:06:00.347162   34607 api_server.go:71] duration metric: took 1.021571792s to wait for apiserver process to appear ...
	I0223 15:06:00.347187   34607 api_server.go:87] waiting for apiserver healthz status ...
	I0223 15:06:00.347212   34607 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62500/healthz ...
	I0223 15:06:00.348425   34607 api_server.go:268] stopped: https://127.0.0.1:62500/healthz: Get "https://127.0.0.1:62500/healthz": EOF
	I0223 15:06:00.848663   34607 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62500/healthz ...
	I0223 15:06:02.199782   34607 api_server.go:278] https://127.0.0.1:62500/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0223 15:06:02.199801   34607 api_server.go:102] status: https://127.0.0.1:62500/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 15:06:02.350710   34607 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62500/healthz ...
	I0223 15:06:02.357722   34607 api_server.go:278] https://127.0.0.1:62500/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 15:06:02.357739   34607 api_server.go:102] status: https://127.0.0.1:62500/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 15:06:02.850646   34607 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62500/healthz ...
	I0223 15:06:02.858164   34607 api_server.go:278] https://127.0.0.1:62500/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 15:06:02.858185   34607 api_server.go:102] status: https://127.0.0.1:62500/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 15:06:03.348805   34607 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62500/healthz ...
	I0223 15:06:03.354012   34607 api_server.go:278] https://127.0.0.1:62500/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 15:06:03.354028   34607 api_server.go:102] status: https://127.0.0.1:62500/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 15:06:03.848718   34607 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62500/healthz ...
	I0223 15:06:03.855245   34607 api_server.go:278] https://127.0.0.1:62500/healthz returned 200:
	ok
	I0223 15:06:03.861903   34607 api_server.go:140] control plane version: v1.26.1
	I0223 15:06:03.861914   34607 api_server.go:130] duration metric: took 3.514616082s to wait for apiserver health ...
	I0223 15:06:03.861920   34607 cni.go:84] Creating CNI manager for ""
	I0223 15:06:03.861929   34607 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 15:06:03.882498   34607 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0223 15:06:03.904375   34607 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0223 15:06:03.914302   34607 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0223 15:06:03.927147   34607 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 15:06:03.934520   34607 system_pods.go:59] 8 kube-system pods found
	I0223 15:06:03.934534   34607 system_pods.go:61] "coredns-787d4945fb-kdxsj" [8129ab98-283d-4b71-b113-a40c130df84d] Running
	I0223 15:06:03.934540   34607 system_pods.go:61] "etcd-default-k8s-diff-port-938000" [b664435b-413d-4b0c-ac63-4f678b33a9ad] Running
	I0223 15:06:03.934544   34607 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-938000" [e52c4148-310d-47d0-9ea9-c2f5484b8f24] Running
	I0223 15:06:03.934548   34607 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-938000" [42d0ed89-0a9f-436d-a181-24b46f69a56f] Running
	I0223 15:06:03.934551   34607 system_pods.go:61] "kube-proxy-gzr6x" [c865b076-092d-4b10-ad0c-bd623d11f87f] Running
	I0223 15:06:03.934555   34607 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-938000" [7d419773-29f3-42cf-95fa-891bc0c4e4b8] Running
	I0223 15:06:03.934563   34607 system_pods.go:61] "metrics-server-7997d45854-hw4vj" [df503d98-8c5a-45bd-8716-49175be62d11] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 15:06:03.934568   34607 system_pods.go:61] "storage-provisioner" [8e406890-9c57-4dc6-a1c3-764c7538df0b] Running
	I0223 15:06:03.934572   34607 system_pods.go:74] duration metric: took 7.41531ms to wait for pod list to return data ...
	I0223 15:06:03.934578   34607 node_conditions.go:102] verifying NodePressure condition ...
	I0223 15:06:03.937785   34607 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0223 15:06:03.937800   34607 node_conditions.go:123] node cpu capacity is 6
	I0223 15:06:03.937812   34607 node_conditions.go:105] duration metric: took 3.229599ms to run NodePressure ...
	I0223 15:06:03.937839   34607 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:06:04.096229   34607 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0223 15:06:04.100473   34607 kubeadm.go:784] kubelet initialised
	I0223 15:06:04.100484   34607 kubeadm.go:785] duration metric: took 4.240557ms waiting for restarted kubelet to initialise ...
	I0223 15:06:04.100491   34607 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 15:06:04.105206   34607 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-kdxsj" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:04.110180   34607 pod_ready.go:92] pod "coredns-787d4945fb-kdxsj" in "kube-system" namespace has status "Ready":"True"
	I0223 15:06:04.110188   34607 pod_ready.go:81] duration metric: took 4.971279ms waiting for pod "coredns-787d4945fb-kdxsj" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:04.110194   34607 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-938000" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:04.121973   34607 pod_ready.go:92] pod "etcd-default-k8s-diff-port-938000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:06:04.121986   34607 pod_ready.go:81] duration metric: took 11.780271ms waiting for pod "etcd-default-k8s-diff-port-938000" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:04.121995   34607 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-938000" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:04.127678   34607 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-938000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:06:04.127687   34607 pod_ready.go:81] duration metric: took 5.686615ms waiting for pod "kube-apiserver-default-k8s-diff-port-938000" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:04.127693   34607 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-938000" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:04.332172   34607 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-938000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:06:04.332183   34607 pod_ready.go:81] duration metric: took 204.477939ms waiting for pod "kube-controller-manager-default-k8s-diff-port-938000" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:04.332190   34607 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gzr6x" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:04.730705   34607 pod_ready.go:92] pod "kube-proxy-gzr6x" in "kube-system" namespace has status "Ready":"True"
	I0223 15:06:04.730719   34607 pod_ready.go:81] duration metric: took 398.513062ms waiting for pod "kube-proxy-gzr6x" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:04.730726   34607 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-938000" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:07.138343   34607 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-938000" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:09.638655   34607 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-938000" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:12.138491   34607 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-938000" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:14.139528   34607 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-938000" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:16.638383   34607 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-938000" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:18.639031   34607 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-938000" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:19.139643   34607 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-938000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:06:19.139658   34607 pod_ready.go:81] duration metric: took 14.408510166s waiting for pod "kube-scheduler-default-k8s-diff-port-938000" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:19.139666   34607 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-hw4vj" in "kube-system" namespace to be "Ready" ...
	I0223 15:06:21.154251   34607 pod_ready.go:102] pod "metrics-server-7997d45854-hw4vj" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:23.155330   34607 pod_ready.go:102] pod "metrics-server-7997d45854-hw4vj" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:25.653840   34607 pod_ready.go:102] pod "metrics-server-7997d45854-hw4vj" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:27.655475   34607 pod_ready.go:102] pod "metrics-server-7997d45854-hw4vj" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:30.153938   34607 pod_ready.go:102] pod "metrics-server-7997d45854-hw4vj" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:32.155543   34607 pod_ready.go:102] pod "metrics-server-7997d45854-hw4vj" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:34.652079   34607 pod_ready.go:102] pod "metrics-server-7997d45854-hw4vj" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:36.653414   34607 pod_ready.go:102] pod "metrics-server-7997d45854-hw4vj" in "kube-system" namespace has status "Ready":"False"
	I0223 15:06:39.367106   33771 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 15:06:39.367332   33771 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 15:06:39.367346   33771 kubeadm.go:322] 
	I0223 15:06:39.367418   33771 kubeadm.go:322] Unfortunately, an error has occurred:
	I0223 15:06:39.367472   33771 kubeadm.go:322] 	timed out waiting for the condition
	I0223 15:06:39.367487   33771 kubeadm.go:322] 
	I0223 15:06:39.367527   33771 kubeadm.go:322] This error is likely caused by:
	I0223 15:06:39.367584   33771 kubeadm.go:322] 	- The kubelet is not running
	I0223 15:06:39.367694   33771 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 15:06:39.367703   33771 kubeadm.go:322] 
	I0223 15:06:39.367840   33771 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 15:06:39.367881   33771 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0223 15:06:39.367919   33771 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0223 15:06:39.367931   33771 kubeadm.go:322] 
	I0223 15:06:39.368053   33771 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 15:06:39.368126   33771 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0223 15:06:39.368200   33771 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0223 15:06:39.368239   33771 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0223 15:06:39.368301   33771 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0223 15:06:39.368333   33771 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0223 15:06:39.370619   33771 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 15:06:39.370685   33771 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 15:06:39.370785   33771 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0223 15:06:39.370874   33771 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 15:06:39.370958   33771 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 15:06:39.371017   33771 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 15:06:39.371035   33771 kubeadm.go:403] StartCluster complete in 8m4.140598534s
	I0223 15:06:39.371131   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:06:39.390321   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.390333   33771 logs.go:279] No container was found matching "kube-apiserver"
	I0223 15:06:39.390403   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:06:39.410851   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.410865   33771 logs.go:279] No container was found matching "etcd"
	I0223 15:06:39.410947   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:06:39.430090   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.430103   33771 logs.go:279] No container was found matching "coredns"
	I0223 15:06:39.430177   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:06:39.449615   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.449630   33771 logs.go:279] No container was found matching "kube-scheduler"
	I0223 15:06:39.449698   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:06:39.469693   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.469708   33771 logs.go:279] No container was found matching "kube-proxy"
	I0223 15:06:39.469779   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:06:39.488934   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.488950   33771 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 15:06:39.489033   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:06:39.508436   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.508449   33771 logs.go:279] No container was found matching "kindnet"
	I0223 15:06:39.508518   33771 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:06:39.534494   33771 logs.go:277] 0 containers: []
	W0223 15:06:39.534507   33771 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0223 15:06:39.534515   33771 logs.go:123] Gathering logs for Docker ...
	I0223 15:06:39.534523   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:06:39.556111   33771 logs.go:123] Gathering logs for container status ...
	I0223 15:06:39.556132   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:06:41.602211   33771 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046007391s)
	I0223 15:06:41.602325   33771 logs.go:123] Gathering logs for kubelet ...
	I0223 15:06:41.602332   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:06:41.639676   33771 logs.go:123] Gathering logs for dmesg ...
	I0223 15:06:41.639691   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:06:41.652035   33771 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:06:41.652050   33771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 15:06:41.706272   33771 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0223 15:06:41.706291   33771 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 15:06:41.706310   33771 out.go:239] * 
	W0223 15:06:41.706442   33771 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 15:06:41.706458   33771 out.go:239] * 
	W0223 15:06:41.707066   33771 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 15:06:41.771711   33771 out.go:177] 
	W0223 15:06:41.813793   33771 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 15:06:41.813856   33771 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 15:06:41.813887   33771 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 15:06:41.834757   33771 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 22:58:31 UTC, end at Thu 2023-02-23 23:06:43 UTC. --
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.361306771Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.361678292Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.361745336Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363177871Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363230712Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363257014Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363271466Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363298828Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363323522Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363351730Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363374761Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363404400Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363559257Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363684527Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363723678Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.364099630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.372604384Z" level=info msg="Loading containers: start."
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.450240223Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.483107583Z" level=info msg="Loading containers: done."
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.491003119Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.491073591Z" level=info msg="Daemon has completed initialization"
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.511892551Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.515536521Z" level=info msg="API listen on [::]:2376"
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.520614220Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-02-23T23:06:45Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:06:45 up  2:35,  0 users,  load average: 0.62, 0.66, 0.99
	Linux old-k8s-version-919000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 22:58:31 UTC, end at Thu 2023-02-23 23:06:45 UTC. --
	Feb 23 23:06:44 old-k8s-version-919000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 23 23:06:44 old-k8s-version-919000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Feb 23 23:06:44 old-k8s-version-919000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 23 23:06:44 old-k8s-version-919000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 23 23:06:44 old-k8s-version-919000 kubelet[13933]: I0223 23:06:44.789185   13933 server.go:410] Version: v1.16.0
	Feb 23 23:06:44 old-k8s-version-919000 kubelet[13933]: I0223 23:06:44.789451   13933 plugins.go:100] No cloud provider specified.
	Feb 23 23:06:44 old-k8s-version-919000 kubelet[13933]: I0223 23:06:44.789487   13933 server.go:773] Client rotation is on, will bootstrap in background
	Feb 23 23:06:44 old-k8s-version-919000 kubelet[13933]: I0223 23:06:44.791297   13933 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 23 23:06:44 old-k8s-version-919000 kubelet[13933]: W0223 23:06:44.792091   13933 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 23 23:06:44 old-k8s-version-919000 kubelet[13933]: W0223 23:06:44.792163   13933 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 23 23:06:44 old-k8s-version-919000 kubelet[13933]: F0223 23:06:44.792187   13933 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 23 23:06:44 old-k8s-version-919000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 23 23:06:44 old-k8s-version-919000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 23 23:06:45 old-k8s-version-919000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Feb 23 23:06:45 old-k8s-version-919000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 23 23:06:45 old-k8s-version-919000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 23 23:06:45 old-k8s-version-919000 kubelet[13961]: I0223 23:06:45.535487   13961 server.go:410] Version: v1.16.0
	Feb 23 23:06:45 old-k8s-version-919000 kubelet[13961]: I0223 23:06:45.536002   13961 plugins.go:100] No cloud provider specified.
	Feb 23 23:06:45 old-k8s-version-919000 kubelet[13961]: I0223 23:06:45.536038   13961 server.go:773] Client rotation is on, will bootstrap in background
	Feb 23 23:06:45 old-k8s-version-919000 kubelet[13961]: I0223 23:06:45.537832   13961 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 23 23:06:45 old-k8s-version-919000 kubelet[13961]: W0223 23:06:45.540870   13961 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 23 23:06:45 old-k8s-version-919000 kubelet[13961]: W0223 23:06:45.540950   13961 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 23 23:06:45 old-k8s-version-919000 kubelet[13961]: F0223 23:06:45.540973   13961 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 23 23:06:45 old-k8s-version-919000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 23 23:06:45 old-k8s-version-919000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0223 15:06:45.491907   34741 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 2 (397.170465ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-919000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (496.20s)
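The failure above pairs the kubelet error "failed to run Kubelet: mountpoint for cpu not found" with kubeadm's warning that Docker is using the "cgroupfs" cgroup driver. Purely as a hypothetical sketch of the troubleshooting the output itself suggests (none of these commands were run as part of this report):
	systemctl status kubelet                          # kubelet state, as suggested by kubeadm
	journalctl -xeu kubelet                           # kubelet logs, as suggested by kubeadm
	docker info --format '{{.CgroupDriver}}'          # confirm which cgroup driver the Docker daemon reports
	out/minikube-darwin-amd64 start -p old-k8s-version-919000 --extra-config=kubelet.cgroup-driver=systemd
The last command applies the suggestion printed with the failure (see https://github.com/kubernetes/minikube/issues/4172); the first three would need to run inside the minikube node, e.g. via 'minikube ssh'.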

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:07:04.540950   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 15:07:06.249650   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:07:21.578593   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:07:22.795387   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
E0223 15:07:27.693767   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kindnet-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:07:45.282773   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
E0223 15:07:50.516637   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:08:22.438743   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:08:50.837176   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kindnet-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:09:13.568647   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:09:19.456143   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 15:09:23.260396   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:09:37.729611   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:10:05.431023   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:10:42.502186   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 15:10:44.194996   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 15:10:46.309878   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:11:25.494109   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:11:40.323622   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:12:04.555935   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 15:12:06.264979   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
E0223 15:12:07.243836   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:12:45.298829   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:12:50.531944   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:13:03.375191   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:13:22.446536   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:14:08.355230   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:14:19.462392   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 15:14:23.266940   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:14:37.737203   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:15:21.743466   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
E0223 15:15:21.748845   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
E0223 15:15:21.759045   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
E0223 15:15:21.779219   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
E0223 15:15:21.820176   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
E0223 15:15:21.900527   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
E0223 15:15:22.061994   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
E0223 15:15:22.382576   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:15:23.022761   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
E0223 15:15:24.303383   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
E0223 15:15:26.865799   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
E0223 15:15:31.986428   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:15:42.226883   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
E0223 15:15:44.200671   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:15:59.775651   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
E0223 15:16:02.708496   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
E0223 15:16:03.949756   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-919000 -n old-k8s-version-919000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 2 (400.04879ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-919000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
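The wait that times out above polls for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace (the label selector is visible in the EOF errors). As an assumed manual equivalent only, with a reachable apiserver, the same check could be expressed as:
	kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard -n kubernetes-dashboard --timeout=9m0s
Here the apiserver behind 127.0.0.1:62354 never becomes reachable, so both commands would fail the same way the test helper does.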
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-919000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-919000:

-- stdout --
	[
	    {
	        "Id": "5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f",
	        "Created": "2023-02-23T22:52:47.108009889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295370,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:58:31.462047896Z",
	            "FinishedAt": "2023-02-23T22:58:28.586447345Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hosts",
	        "LogPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f-json.log",
	        "Name": "/old-k8s-version-919000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-919000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-919000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-919000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-919000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-919000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7f0571f7cf360e3b17992c95713f7ea16dfa34d74d6177b2bc9da7d70e05cc8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62350"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62351"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62352"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62353"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62354"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e7f0571f7cf3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-919000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5b30451d4570",
	                        "old-k8s-version-919000"
	                    ],
	                    "NetworkID": "c7154bbdfe1ae896999b2fd2c462dec29ff61281e64aa32aac9e788f781af78c",
	                    "EndpointID": "39c19201631fccca791376922476d56d0e5ed13cd34e99506a6626afdd2a5781",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
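The full `docker container inspect` JSON above is what the harness dumps verbatim; in practice the minikube code whose log appears further down pulls single fields out of it with a Go template via `--format`. The sketch below is a minimal, hypothetical illustration of that pattern (it is not part of the test suite): it reuses the exact template string seen later in the log to read the host port Docker mapped to the container's 22/tcp, assuming `docker` is on PATH and the `old-k8s-version-919000` container from the JSON above still exists.

// inspect_port.go - hypothetical sketch, not part of helpers_test.go or minikube.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort returns the host port published for the container's 22/tcp,
// using the same --format expression that appears in the minikube log below.
func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("old-k8s-version-919000")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Per the Ports section of the JSON above, this prints 62350.
	fmt.Println("ssh published on 127.0.0.1:" + port)
}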
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 2 (408.84406ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
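The "(may be ok)" note reflects that the harness records a non-zero exit from `minikube status` without failing immediately. The following is a hypothetical sketch of that general pattern in Go (not the actual helpers_test.go code): run the command, keep whatever stdout it produced, and treat a non-zero exit code as data to report rather than a fatal error. The binary path and profile name are copied from the command shown above.

// exitcode.go - hypothetical illustration of tolerating a non-zero exit status.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "status", "--format={{.Host}}",
		"-p", "old-k8s-version-919000", "-n", "old-k8s-version-919000")
	out, err := cmd.Output() // stdout is still returned even when the exit code is non-zero
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("status %q, exit 0\n", out)
	case errors.As(err, &exitErr):
		// The binary ran but signalled a problem; the report above shows exit
		// status 2 with "Running" on stdout, which the test accepts for now.
		fmt.Printf("status %q, exit %d (may be ok)\n", out, exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube:", err) // e.g. binary not built
	}
}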
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-919000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-919000 logs -n 25: (3.45557651s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:04 PST | 23 Feb 23 15:05 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                             | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:05 PST | 23 Feb 23 15:05 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:05 PST | 23 Feb 23 15:05 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-938000     | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:05 PST | 23 Feb 23 15:05 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:05 PST | 23 Feb 23 15:10 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| ssh     | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                           |                              |         |         |                     |                     |
	| pause   | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	| start   | -p newest-cni-835000 --memory=2200 --alsologtostderr | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-835000           | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p newest-cni-835000                                 | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-835000                | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p newest-cni-835000 --memory=2200 --alsologtostderr | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:12 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-835000 sudo                            | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:12 PST | 23 Feb 23 15:12 PST |
	|         | crictl images -o json                                |                              |         |         |                     |                     |
	| pause   | -p newest-cni-835000                                 | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:12 PST | 23 Feb 23 15:12 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p newest-cni-835000                                 | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:12 PST | 23 Feb 23 15:12 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-835000                                 | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:12 PST | 23 Feb 23 15:12 PST |
	| delete  | -p newest-cni-835000                                 | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:12 PST | 23 Feb 23 15:12 PST |
	| start   | -p embed-certs-057000                                | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:12 PST | 23 Feb 23 15:13 PST |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-057000          | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:13 PST | 23 Feb 23 15:13 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p embed-certs-057000                                | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:13 PST | 23 Feb 23 15:13 PST |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-057000               | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:13 PST | 23 Feb 23 15:13 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p embed-certs-057000                                | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:13 PST |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 15:13:39
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 15:13:39.521864   35807 out.go:296] Setting OutFile to fd 1 ...
	I0223 15:13:39.522056   35807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 15:13:39.522061   35807 out.go:309] Setting ErrFile to fd 2...
	I0223 15:13:39.522064   35807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 15:13:39.522175   35807 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 15:13:39.523542   35807 out.go:303] Setting JSON to false
	I0223 15:13:39.541759   35807 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9793,"bootTime":1677184226,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 15:13:39.541826   35807 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 15:13:39.563671   35807 out.go:177] * [embed-certs-057000] minikube v1.29.0 on Darwin 13.2
	I0223 15:13:39.606032   35807 notify.go:220] Checking for updates...
	I0223 15:13:39.606075   35807 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 15:13:39.627901   35807 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 15:13:39.649015   35807 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 15:13:39.671032   35807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 15:13:39.692781   35807 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	I0223 15:13:39.713961   35807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 15:13:39.734945   35807 config.go:182] Loaded profile config "embed-certs-057000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 15:13:39.735304   35807 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 15:13:39.800157   35807 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 15:13:39.800282   35807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 15:13:39.941963   35807 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 23:13:39.84988923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 15:13:39.963796   35807 out.go:177] * Using the docker driver based on existing profile
	I0223 15:13:39.985441   35807 start.go:296] selected driver: docker
	I0223 15:13:39.985473   35807 start.go:857] validating driver "docker" against &{Name:embed-certs-057000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-057000 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 15:13:39.985633   35807 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 15:13:39.989470   35807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 15:13:40.130163   35807 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 23:13:40.03871469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 15:13:40.130334   35807 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 15:13:40.130355   35807 cni.go:84] Creating CNI manager for ""
	I0223 15:13:40.130371   35807 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 15:13:40.130381   35807 start_flags.go:319] config:
	{Name:embed-certs-057000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-057000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 15:13:40.173696   35807 out.go:177] * Starting control plane node embed-certs-057000 in cluster embed-certs-057000
	I0223 15:13:40.194929   35807 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 15:13:40.216911   35807 out.go:177] * Pulling base image ...
	I0223 15:13:40.258830   35807 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 15:13:40.258920   35807 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 15:13:40.258949   35807 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 15:13:40.258969   35807 cache.go:57] Caching tarball of preloaded images
	I0223 15:13:40.259179   35807 preload.go:174] Found /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 15:13:40.259200   35807 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 15:13:40.260096   35807 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/config.json ...
	I0223 15:13:40.315639   35807 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 15:13:40.315658   35807 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 15:13:40.315682   35807 cache.go:193] Successfully downloaded all kic artifacts
	I0223 15:13:40.315735   35807 start.go:364] acquiring machines lock for embed-certs-057000: {Name:mk154721afc5beb409bbb73851ee94a0bbebb00c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 15:13:40.315822   35807 start.go:368] acquired machines lock for "embed-certs-057000" in 65.654µs
	I0223 15:13:40.315850   35807 start.go:96] Skipping create...Using existing machine configuration
	I0223 15:13:40.315859   35807 fix.go:55] fixHost starting: 
	I0223 15:13:40.316102   35807 cli_runner.go:164] Run: docker container inspect embed-certs-057000 --format={{.State.Status}}
	I0223 15:13:40.373893   35807 fix.go:103] recreateIfNeeded on embed-certs-057000: state=Stopped err=<nil>
	W0223 15:13:40.373940   35807 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 15:13:40.395737   35807 out.go:177] * Restarting existing docker container for "embed-certs-057000" ...
	I0223 15:13:40.416604   35807 cli_runner.go:164] Run: docker start embed-certs-057000
	I0223 15:13:40.740649   35807 cli_runner.go:164] Run: docker container inspect embed-certs-057000 --format={{.State.Status}}
	I0223 15:13:40.800162   35807 kic.go:426] container "embed-certs-057000" state is running.
	I0223 15:13:40.800750   35807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-057000
	I0223 15:13:40.860901   35807 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/config.json ...
	I0223 15:13:40.861336   35807 machine.go:88] provisioning docker machine ...
	I0223 15:13:40.861371   35807 ubuntu.go:169] provisioning hostname "embed-certs-057000"
	I0223 15:13:40.861452   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:40.922710   35807 main.go:141] libmachine: Using SSH client type: native
	I0223 15:13:40.923118   35807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63186 <nil> <nil>}
	I0223 15:13:40.923132   35807 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-057000 && echo "embed-certs-057000" | sudo tee /etc/hostname
	I0223 15:13:41.075688   35807 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-057000
	
	I0223 15:13:41.075774   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:41.133211   35807 main.go:141] libmachine: Using SSH client type: native
	I0223 15:13:41.133568   35807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63186 <nil> <nil>}
	I0223 15:13:41.133581   35807 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-057000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-057000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-057000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 15:13:41.265140   35807 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 15:13:41.265163   35807 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
	I0223 15:13:41.265198   35807 ubuntu.go:177] setting up certificates
	I0223 15:13:41.265207   35807 provision.go:83] configureAuth start
	I0223 15:13:41.265309   35807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-057000
	I0223 15:13:41.323204   35807 provision.go:138] copyHostCerts
	I0223 15:13:41.323315   35807 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
	I0223 15:13:41.323329   35807 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 15:13:41.323434   35807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
	I0223 15:13:41.323640   35807 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
	I0223 15:13:41.323648   35807 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 15:13:41.323708   35807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
	I0223 15:13:41.323856   35807 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
	I0223 15:13:41.323861   35807 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 15:13:41.323922   35807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
	I0223 15:13:41.324047   35807 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.embed-certs-057000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-057000]
	I0223 15:13:41.380473   35807 provision.go:172] copyRemoteCerts
	I0223 15:13:41.380523   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 15:13:41.380576   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:41.437706   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:13:41.532086   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 15:13:41.549234   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0223 15:13:41.566038   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 15:13:41.582933   35807 provision.go:86] duration metric: configureAuth took 317.702759ms
	I0223 15:13:41.582947   35807 ubuntu.go:193] setting minikube options for container-runtime
	I0223 15:13:41.583114   35807 config.go:182] Loaded profile config "embed-certs-057000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 15:13:41.583177   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:41.639766   35807 main.go:141] libmachine: Using SSH client type: native
	I0223 15:13:41.640140   35807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63186 <nil> <nil>}
	I0223 15:13:41.640152   35807 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 15:13:41.773205   35807 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 15:13:41.773219   35807 ubuntu.go:71] root file system type: overlay
	I0223 15:13:41.773303   35807 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 15:13:41.773390   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:41.830400   35807 main.go:141] libmachine: Using SSH client type: native
	I0223 15:13:41.830746   35807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63186 <nil> <nil>}
	I0223 15:13:41.830800   35807 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 15:13:41.972798   35807 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 15:13:41.972894   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:42.031236   35807 main.go:141] libmachine: Using SSH client type: native
	I0223 15:13:42.031605   35807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63186 <nil> <nil>}
	I0223 15:13:42.031618   35807 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 15:13:42.169063   35807 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 15:13:42.169081   35807 machine.go:91] provisioned docker machine in 1.307702966s
	I0223 15:13:42.169091   35807 start.go:300] post-start starting for "embed-certs-057000" (driver="docker")
	I0223 15:13:42.169097   35807 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 15:13:42.169175   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 15:13:42.169243   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:42.225899   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:13:42.322323   35807 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 15:13:42.325964   35807 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 15:13:42.325983   35807 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 15:13:42.325995   35807 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 15:13:42.325999   35807 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 15:13:42.326006   35807 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
	I0223 15:13:42.326090   35807 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
	I0223 15:13:42.326254   35807 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
	I0223 15:13:42.326457   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 15:13:42.333934   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
	I0223 15:13:42.350781   35807 start.go:303] post-start completed in 181.670372ms
	I0223 15:13:42.350875   35807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 15:13:42.350939   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:42.407493   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:13:42.500330   35807 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 15:13:42.504761   35807 fix.go:57] fixHost completed within 2.188843272s
	I0223 15:13:42.504778   35807 start.go:83] releasing machines lock for "embed-certs-057000", held for 2.188894254s
	I0223 15:13:42.504872   35807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-057000
	I0223 15:13:42.561647   35807 ssh_runner.go:195] Run: cat /version.json
	I0223 15:13:42.561684   35807 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 15:13:42.561725   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:42.561746   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:42.622303   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:13:42.622320   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:13:42.763888   35807 ssh_runner.go:195] Run: systemctl --version
	I0223 15:13:42.768523   35807 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 15:13:42.774047   35807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 15:13:42.789617   35807 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 15:13:42.789690   35807 ssh_runner.go:195] Run: which cri-dockerd
	I0223 15:13:42.793842   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 15:13:42.801557   35807 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 15:13:42.815158   35807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 15:13:42.823426   35807 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0223 15:13:42.823444   35807 start.go:485] detecting cgroup driver to use...
	I0223 15:13:42.823456   35807 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 15:13:42.823539   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 15:13:42.836519   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 15:13:42.845081   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 15:13:42.853668   35807 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 15:13:42.853726   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 15:13:42.862123   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 15:13:42.870597   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 15:13:42.879154   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 15:13:42.887556   35807 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 15:13:42.895481   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 15:13:42.904028   35807 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 15:13:42.911148   35807 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 15:13:42.918202   35807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 15:13:42.984522   35807 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 15:13:43.053034   35807 start.go:485] detecting cgroup driver to use...
	I0223 15:13:43.053054   35807 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 15:13:43.053124   35807 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 15:13:43.064423   35807 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 15:13:43.064494   35807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 15:13:43.074473   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 15:13:43.088934   35807 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 15:13:43.179420   35807 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 15:13:43.277388   35807 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 15:13:43.277409   35807 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 15:13:43.291053   35807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 15:13:43.382527   35807 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 15:13:43.654504   35807 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 15:13:43.725020   35807 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 15:13:43.794135   35807 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 15:13:43.864177   35807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 15:13:43.934934   35807 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 15:13:43.946639   35807 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 15:13:43.946722   35807 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 15:13:43.950745   35807 start.go:553] Will wait 60s for crictl version
	I0223 15:13:43.950796   35807 ssh_runner.go:195] Run: which crictl
	I0223 15:13:43.954525   35807 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 15:13:44.050655   35807 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 15:13:44.050733   35807 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 15:13:44.076000   35807 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 15:13:44.143442   35807 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 15:13:44.143686   35807 cli_runner.go:164] Run: docker exec -t embed-certs-057000 dig +short host.docker.internal
	I0223 15:13:44.251235   35807 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 15:13:44.251345   35807 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 15:13:44.255797   35807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 15:13:44.266303   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:44.325363   35807 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 15:13:44.325445   35807 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 15:13:44.346211   35807 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0223 15:13:44.346228   35807 docker.go:560] Images already preloaded, skipping extraction
	I0223 15:13:44.346326   35807 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 15:13:44.366675   35807 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0223 15:13:44.366700   35807 cache_images.go:84] Images are preloaded, skipping loading
	I0223 15:13:44.366773   35807 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 15:13:44.392743   35807 cni.go:84] Creating CNI manager for ""
	I0223 15:13:44.392760   35807 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 15:13:44.392777   35807 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 15:13:44.392795   35807 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-057000 NodeName:embed-certs-057000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 15:13:44.392915   35807 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-057000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 15:13:44.392985   35807 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-057000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:embed-certs-057000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 15:13:44.393049   35807 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 15:13:44.401120   35807 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 15:13:44.401178   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 15:13:44.408792   35807 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0223 15:13:44.421382   35807 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 15:13:44.434287   35807 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0223 15:13:44.447169   35807 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0223 15:13:44.450873   35807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 15:13:44.460553   35807 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000 for IP: 192.168.76.2
	I0223 15:13:44.460571   35807 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 15:13:44.460739   35807 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
	I0223 15:13:44.460789   35807 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
	I0223 15:13:44.460873   35807 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/client.key
	I0223 15:13:44.460966   35807 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/apiserver.key.31bdca25
	I0223 15:13:44.461026   35807 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/proxy-client.key
	I0223 15:13:44.461220   35807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
	W0223 15:13:44.461264   35807 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
	I0223 15:13:44.461277   35807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 15:13:44.461310   35807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
	I0223 15:13:44.461344   35807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
	I0223 15:13:44.461374   35807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
	I0223 15:13:44.461473   35807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
	I0223 15:13:44.462053   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 15:13:44.479155   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 15:13:44.495969   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 15:13:44.512770   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 15:13:44.529716   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 15:13:44.562646   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 15:13:44.580215   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 15:13:44.597613   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 15:13:44.615562   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
	I0223 15:13:44.632918   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 15:13:44.649852   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
	I0223 15:13:44.666603   35807 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 15:13:44.679500   35807 ssh_runner.go:195] Run: openssl version
	I0223 15:13:44.684873   35807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
	I0223 15:13:44.693236   35807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
	I0223 15:13:44.697504   35807 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 15:13:44.697553   35807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
	I0223 15:13:44.702894   35807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
	I0223 15:13:44.710098   35807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
	I0223 15:13:44.718166   35807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
	I0223 15:13:44.722106   35807 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 15:13:44.722157   35807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
	I0223 15:13:44.727500   35807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 15:13:44.734850   35807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 15:13:44.742825   35807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 15:13:44.747023   35807 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 15:13:44.747066   35807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 15:13:44.752528   35807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 15:13:44.760122   35807 kubeadm.go:401] StartCluster: {Name:embed-certs-057000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-057000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 15:13:44.760231   35807 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 15:13:44.779475   35807 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 15:13:44.787272   35807 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 15:13:44.787289   35807 kubeadm.go:633] restartCluster start
	I0223 15:13:44.787357   35807 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 15:13:44.794751   35807 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:44.794816   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:44.853386   35807 kubeconfig.go:135] verify returned: extract IP: "embed-certs-057000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 15:13:44.853552   35807 kubeconfig.go:146] "embed-certs-057000" context is missing from /Users/jenkins/minikube-integration/15909-14738/kubeconfig - will repair!
	I0223 15:13:44.853872   35807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/kubeconfig: {Name:mk366c13f6069774a57c4d74123d5172c8522a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 15:13:44.855464   35807 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 15:13:44.863362   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:44.863416   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:44.872012   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:45.372081   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:45.372179   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:45.381200   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:45.874211   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:45.874364   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:45.885273   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:46.372301   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:46.372434   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:46.383239   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:46.873526   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:46.873740   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:46.884625   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:47.374228   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:47.374478   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:47.385529   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:47.872290   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:47.872512   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:47.883239   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:48.374159   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:48.374225   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:48.383777   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:48.873778   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:48.873985   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:48.884914   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:49.373053   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:49.373274   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:49.384268   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:49.873342   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:49.873532   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:49.884367   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:50.372825   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:50.372979   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:50.384366   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:50.873933   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:50.874064   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:50.885153   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:51.372389   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:51.372504   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:51.383739   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:51.872944   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:51.873079   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:51.884354   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:52.372366   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:52.372556   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:52.382910   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:52.872426   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:52.872633   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:52.883277   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:53.372820   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:53.372971   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:53.383912   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:53.872627   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:53.872704   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:53.882041   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:54.374219   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:54.374426   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:54.385855   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:54.874343   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:54.874511   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:54.885413   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:54.885425   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:54.885475   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:54.893716   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:54.893729   35807 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0223 15:13:54.893738   35807 kubeadm.go:1120] stopping kube-system containers ...
	I0223 15:13:54.893807   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 15:13:54.915044   35807 docker.go:456] Stopping containers: [cc50a060642a 558b8f0ad13f da7894d8d7c1 792e7a1537cc 68d629fd6e42 bb7bcaace72a ecdbf13fbbf3 48a3667de181 f48fe9277e7d 01e2ca7abe06 3db9cab81bb2 ea597afde2a9 a8175c789c55 bcc2a5478340 a4ceab48a41b dff58633fe1c]
	I0223 15:13:54.915135   35807 ssh_runner.go:195] Run: docker stop cc50a060642a 558b8f0ad13f da7894d8d7c1 792e7a1537cc 68d629fd6e42 bb7bcaace72a ecdbf13fbbf3 48a3667de181 f48fe9277e7d 01e2ca7abe06 3db9cab81bb2 ea597afde2a9 a8175c789c55 bcc2a5478340 a4ceab48a41b dff58633fe1c
	I0223 15:13:54.935104   35807 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 15:13:54.945801   35807 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 15:13:54.953649   35807 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 23 23:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 23 23:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Feb 23 23:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 23 23:12 /etc/kubernetes/scheduler.conf
	
	I0223 15:13:54.953709   35807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0223 15:13:54.961095   35807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0223 15:13:54.968479   35807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0223 15:13:54.975669   35807 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:54.975720   35807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0223 15:13:54.982774   35807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0223 15:13:54.990091   35807 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:54.990142   35807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0223 15:13:54.997145   35807 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 15:13:55.004722   35807 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 15:13:55.004737   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:13:55.058020   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:13:55.652634   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:13:55.782130   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:13:55.842645   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:13:55.949987   35807 api_server.go:51] waiting for apiserver process to appear ...
	I0223 15:13:55.950059   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:13:56.459947   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:13:56.960289   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:13:56.975118   35807 api_server.go:71] duration metric: took 1.025108372s to wait for apiserver process to appear ...
	I0223 15:13:56.975140   35807 api_server.go:87] waiting for apiserver healthz status ...
	I0223 15:13:56.975158   35807 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63190/healthz ...
	I0223 15:13:56.976348   35807 api_server.go:268] stopped: https://127.0.0.1:63190/healthz: Get "https://127.0.0.1:63190/healthz": EOF
	I0223 15:13:57.476504   35807 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63190/healthz ...
	I0223 15:13:59.089344   35807 api_server.go:278] https://127.0.0.1:63190/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0223 15:13:59.089360   35807 api_server.go:102] status: https://127.0.0.1:63190/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 15:13:59.476815   35807 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63190/healthz ...
	I0223 15:13:59.483428   35807 api_server.go:278] https://127.0.0.1:63190/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 15:13:59.483443   35807 api_server.go:102] status: https://127.0.0.1:63190/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 15:13:59.976950   35807 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63190/healthz ...
	I0223 15:13:59.982272   35807 api_server.go:278] https://127.0.0.1:63190/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 15:13:59.982285   35807 api_server.go:102] status: https://127.0.0.1:63190/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 15:14:00.476589   35807 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63190/healthz ...
	I0223 15:14:00.483207   35807 api_server.go:278] https://127.0.0.1:63190/healthz returned 200:
	ok
	I0223 15:14:00.489926   35807 api_server.go:140] control plane version: v1.26.1
	I0223 15:14:00.489937   35807 api_server.go:130] duration metric: took 3.514703433s to wait for apiserver health ...
	I0223 15:14:00.489942   35807 cni.go:84] Creating CNI manager for ""
	I0223 15:14:00.489954   35807 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 15:14:00.511439   35807 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0223 15:14:00.532402   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0223 15:14:00.542474   35807 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0223 15:14:00.555557   35807 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 15:14:00.562386   35807 system_pods.go:59] 8 kube-system pods found
	I0223 15:14:00.562400   35807 system_pods.go:61] "coredns-787d4945fb-wcn9r" [9f4f5578-3ac6-440d-97eb-89d1b11f8a47] Running
	I0223 15:14:00.562404   35807 system_pods.go:61] "etcd-embed-certs-057000" [ddd642c0-a140-41a7-bbbd-87060ab43042] Running
	I0223 15:14:00.562408   35807 system_pods.go:61] "kube-apiserver-embed-certs-057000" [7c9ddf95-c988-4085-986a-054e9baa87cb] Running
	I0223 15:14:00.562415   35807 system_pods.go:61] "kube-controller-manager-embed-certs-057000" [600dbddf-4b1a-4049-9247-1ba49f5680cb] Running
	I0223 15:14:00.562420   35807 system_pods.go:61] "kube-proxy-mqfs7" [f5163c21-0a3f-45c3-b8a6-bcee2d37da73] Running
	I0223 15:14:00.562423   35807 system_pods.go:61] "kube-scheduler-embed-certs-057000" [dcfe8b51-0610-4947-b4db-04d6e156fd5a] Running
	I0223 15:14:00.562429   35807 system_pods.go:61] "metrics-server-7997d45854-2dqv2" [984b03f5-27f0-4b44-b72b-344cc5fc2005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 15:14:00.562433   35807 system_pods.go:61] "storage-provisioner" [905cca6a-8691-4cdf-9640-6f46de153555] Running
	I0223 15:14:00.562437   35807 system_pods.go:74] duration metric: took 6.87081ms to wait for pod list to return data ...
	I0223 15:14:00.562443   35807 node_conditions.go:102] verifying NodePressure condition ...
	I0223 15:14:00.565656   35807 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0223 15:14:00.565670   35807 node_conditions.go:123] node cpu capacity is 6
	I0223 15:14:00.565678   35807 node_conditions.go:105] duration metric: took 3.231415ms to run NodePressure ...
	I0223 15:14:00.565691   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:14:00.693500   35807 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0223 15:14:00.698255   35807 retry.go:31] will retry after 263.262845ms: kubelet not initialised
	I0223 15:14:00.966655   35807 retry.go:31] will retry after 257.806036ms: kubelet not initialised
	I0223 15:14:01.231562   35807 retry.go:31] will retry after 421.334816ms: kubelet not initialised
	I0223 15:14:01.658089   35807 retry.go:31] will retry after 928.576713ms: kubelet not initialised
	I0223 15:14:02.592517   35807 retry.go:31] will retry after 719.215583ms: kubelet not initialised
	I0223 15:14:03.318322   35807 kubeadm.go:784] kubelet initialised
	I0223 15:14:03.318334   35807 kubeadm.go:785] duration metric: took 2.624753023s waiting for restarted kubelet to initialise ...
	I0223 15:14:03.318342   35807 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 15:14:03.322734   35807 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-wcn9r" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:03.327881   35807 pod_ready.go:92] pod "coredns-787d4945fb-wcn9r" in "kube-system" namespace has status "Ready":"True"
	I0223 15:14:03.327890   35807 pod_ready.go:81] duration metric: took 5.143832ms waiting for pod "coredns-787d4945fb-wcn9r" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:03.327895   35807 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:03.332798   35807 pod_ready.go:92] pod "etcd-embed-certs-057000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:14:03.332806   35807 pod_ready.go:81] duration metric: took 4.905622ms waiting for pod "etcd-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:03.332811   35807 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:03.337292   35807 pod_ready.go:92] pod "kube-apiserver-embed-certs-057000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:14:03.337300   35807 pod_ready.go:81] duration metric: took 4.484385ms waiting for pod "kube-apiserver-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:03.337308   35807 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:05.351074   35807 pod_ready.go:102] pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:07.851683   35807 pod_ready.go:102] pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:09.848057   35807 pod_ready.go:92] pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:14:09.848071   35807 pod_ready.go:81] duration metric: took 6.510595764s waiting for pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:09.848078   35807 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqfs7" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:09.853117   35807 pod_ready.go:92] pod "kube-proxy-mqfs7" in "kube-system" namespace has status "Ready":"True"
	I0223 15:14:09.853126   35807 pod_ready.go:81] duration metric: took 5.018246ms waiting for pod "kube-proxy-mqfs7" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:09.853132   35807 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:11.863288   35807 pod_ready.go:102] pod "kube-scheduler-embed-certs-057000" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:12.863940   35807 pod_ready.go:92] pod "kube-scheduler-embed-certs-057000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:14:12.863956   35807 pod_ready.go:81] duration metric: took 3.010743905s waiting for pod "kube-scheduler-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:12.863964   35807 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:14.876702   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:16.877932   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:19.379238   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:21.876452   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:24.377332   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:26.377420   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:28.876885   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:30.877451   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:33.376789   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:35.877129   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:37.877481   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:39.878385   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:42.377633   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:44.878779   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:47.377491   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:49.378343   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:51.877243   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:54.378999   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:56.879186   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:59.376754   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:01.378708   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:03.379359   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:05.877797   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:08.376404   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:10.377698   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:12.876817   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:14.877611   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:16.878364   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:19.377770   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:21.379239   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:23.876560   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:25.879561   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:28.377753   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:30.379884   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:32.878212   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:34.880040   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:37.377283   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:39.379094   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:41.878298   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:44.378394   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:46.878228   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:48.878724   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:50.879137   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:52.881046   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:55.378196   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:57.380255   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:59.888564   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:02.377873   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:04.881183   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:07.380603   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:09.880951   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:12.380810   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 22:58:31 UTC, end at Thu 2023-02-23 23:16:17 UTC. --
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.361306771Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.361678292Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.361745336Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363177871Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363230712Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363257014Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363271466Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363298828Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363323522Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363351730Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363374761Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363404400Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363559257Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363684527Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363723678Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.364099630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.372604384Z" level=info msg="Loading containers: start."
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.450240223Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.483107583Z" level=info msg="Loading containers: done."
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.491003119Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.491073591Z" level=info msg="Daemon has completed initialization"
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.511892551Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.515536521Z" level=info msg="API listen on [::]:2376"
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.520614220Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-02-23T23:16:20Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:16:20 up  2:45,  0 users,  load average: 0.41, 1.10, 1.10
	Linux old-k8s-version-919000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 22:58:31 UTC, end at Thu 2023-02-23 23:16:20 UTC. --
	Feb 23 23:16:18 old-k8s-version-919000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 23 23:16:19 old-k8s-version-919000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 927.
	Feb 23 23:16:19 old-k8s-version-919000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 23 23:16:19 old-k8s-version-919000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 23 23:16:19 old-k8s-version-919000 kubelet[24098]: I0223 23:16:19.560037   24098 server.go:410] Version: v1.16.0
	Feb 23 23:16:19 old-k8s-version-919000 kubelet[24098]: I0223 23:16:19.560318   24098 plugins.go:100] No cloud provider specified.
	Feb 23 23:16:19 old-k8s-version-919000 kubelet[24098]: I0223 23:16:19.560332   24098 server.go:773] Client rotation is on, will bootstrap in background
	Feb 23 23:16:19 old-k8s-version-919000 kubelet[24098]: I0223 23:16:19.562806   24098 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 23 23:16:19 old-k8s-version-919000 kubelet[24098]: W0223 23:16:19.564135   24098 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 23 23:16:19 old-k8s-version-919000 kubelet[24098]: W0223 23:16:19.564202   24098 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 23 23:16:19 old-k8s-version-919000 kubelet[24098]: F0223 23:16:19.564224   24098 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 23 23:16:19 old-k8s-version-919000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 23 23:16:19 old-k8s-version-919000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 23 23:16:20 old-k8s-version-919000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Feb 23 23:16:20 old-k8s-version-919000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 23 23:16:20 old-k8s-version-919000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 23 23:16:20 old-k8s-version-919000 kubelet[24128]: I0223 23:16:20.318820   24128 server.go:410] Version: v1.16.0
	Feb 23 23:16:20 old-k8s-version-919000 kubelet[24128]: I0223 23:16:20.319289   24128 plugins.go:100] No cloud provider specified.
	Feb 23 23:16:20 old-k8s-version-919000 kubelet[24128]: I0223 23:16:20.319326   24128 server.go:773] Client rotation is on, will bootstrap in background
	Feb 23 23:16:20 old-k8s-version-919000 kubelet[24128]: I0223 23:16:20.321050   24128 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 23 23:16:20 old-k8s-version-919000 kubelet[24128]: W0223 23:16:20.321805   24128 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 23 23:16:20 old-k8s-version-919000 kubelet[24128]: W0223 23:16:20.321875   24128 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 23 23:16:20 old-k8s-version-919000 kubelet[24128]: F0223 23:16:20.321899   24128 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 23 23:16:20 old-k8s-version-919000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 23 23:16:20 old-k8s-version-919000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 15:16:20.268861   36040 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 2 (395.689525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-919000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.83s)
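Note on the failure mode recorded above: the kubelet journal shows the v1.16.0 kubelet crash-looping (restart counter 927, 928) on "failed to run Kubelet: mountpoint for cpu not found", which is consistent with kubelet v1.16 expecting a cgroup v1 mount that exposes the "cpu" controller and not finding one on the node. The following is a minimal, illustrative Go sketch (not minikube code) of how that condition can be checked by scanning /proc/mounts; the file name and output strings are hypothetical.

	// cgroupcheck.go: illustrative sketch only. Reports whether a cgroup v1
	// mount exposing the "cpu" controller exists, i.e. the precondition whose
	// absence a v1.16 kubelet reports as "mountpoint for cpu not found".
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// hasCgroupV1CPUMount scans a /proc/mounts-style file for a "cgroup"
	// filesystem whose mount options include the "cpu" controller.
	func hasCgroupV1CPUMount(mountsPath string) (bool, error) {
		f, err := os.Open(mountsPath)
		if err != nil {
			return false, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		for sc.Scan() {
			// /proc/mounts fields: device mountpoint fstype options dump pass
			fields := strings.Fields(sc.Text())
			if len(fields) < 4 || fields[2] != "cgroup" {
				continue
			}
			for _, opt := range strings.Split(fields[3], ",") {
				if opt == "cpu" {
					return true, nil
				}
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := hasCgroupV1CPUMount("/proc/mounts")
		if err != nil {
			fmt.Fprintln(os.Stderr, "error:", err)
			os.Exit(1)
		}
		if ok {
			fmt.Println("cgroup v1 cpu controller mount found")
		} else {
			fmt.Println("no cgroup v1 cpu mount; a v1.16 kubelet would fail with \"mountpoint for cpu not found\"")
		}
	}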

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
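The warnings that follow record each failed attempt to list pods in the "kubernetes-dashboard" namespace by the "k8s-app=kubernetes-dashboard" label selector: first EOFs while the apiserver on 127.0.0.1:62354 is unreachable, then client-side rate-limiter "context deadline exceeded" errors. As a rough illustration of that kind of poll, here is a hedged Go sketch assuming client-go; the kubeconfig path and the 3-second interval are placeholders, and this is not the actual helpers_test.go implementation.

	// dashboardwait.go: illustrative sketch only. Polls for a Running pod
	// matching a label selector until a 9m deadline, logging a warning on
	// each failed list attempt, similar in spirit to the lines below.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; a real test would use its profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		const (
			ns       = "kubernetes-dashboard"
			selector = "k8s-app=kubernetes-dashboard"
		)

		// Retry every 3s for up to 9m; list errors are logged and retried
		// rather than failing immediately, since the apiserver may still be down.
		err = wait.PollImmediate(3*time.Second, 9*time.Minute, func() (bool, error) {
			pods, listErr := client.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if listErr != nil {
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, listErr)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
		if err != nil {
			log.Fatalf("pods matching %q never became ready: %v", selector, err)
		}
	}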
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:16:40.330207   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:16:43.671856   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:17:04.563316   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 15:17:06.273696   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:17:27.716180   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kindnet-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:17:45.306959   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:17:50.537342   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:18:05.594042   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:18:22.455441   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 15:18:29.334895   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:19:19.469096   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 15:19:23.274350   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:19:37.744796   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:20:21.751367   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:20:44.208034   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:20:49.440471   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:20:59.782706   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
E0223 15:21:00.808794   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:21:03.956910   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:21:40.339633   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:22:04.571255   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
E0223 15:22:06.280089   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0223 15:22:27.723648   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kindnet-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62354/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 15:22:45.313039   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kubenet-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 15:22:50.544970   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 15:23:22.462499   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 15:24:02.830364   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 15:24:19.477116   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 15:24:23.281911   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 15:24:37.752474   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 15:25:21.759001   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/default-k8s-diff-port-938000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0223 15:25:30.862901   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kindnet-452000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-919000 -n old-k8s-version-919000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 2 (402.67157ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-919000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-919000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-919000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.797µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-919000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
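The timeout above comes from the test repeatedly listing pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace after the stop/start cycle and never seeing one become Ready. A minimal sketch of re-running that check by hand against the same profile (context, namespace, and label names are taken from this run; the commands are illustrative and not part of the test suite):

    # Hypothetical manual re-check of the dashboard addon for this profile.
    kubectl --context old-k8s-version-919000 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard -o wide

    # Mirror the test's 9m0s window while waiting for the pods to become Ready.
    kubectl --context old-k8s-version-919000 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

    # Check whether the scraper deployment carries the custom image the test expects
    # (the test looks for k8s.gcr.io/echoserver:1.4).
    kubectl --context old-k8s-version-919000 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper | grep -i image

Here the describe call is the same one the test attempted above; it failed only because the test's own context deadline had already expired, not because the deployment query itself is invalid.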
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-919000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-919000:

-- stdout --
	[
	    {
	        "Id": "5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f",
	        "Created": "2023-02-23T22:52:47.108009889Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295370,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T22:58:31.462047896Z",
	            "FinishedAt": "2023-02-23T22:58:28.586447345Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/hosts",
	        "LogPath": "/var/lib/docker/containers/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f/5b30451d45705a9a685c21947c3a0ff35451f28d40836b4bc4eca950a8b5ea6f-json.log",
	        "Name": "/old-k8s-version-919000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-919000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-919000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f-init/diff:/var/lib/docker/overlay2/312af7914f267135654023cac986639fda26bce0e9e16676c1ee839dedb36ea3/diff:/var/lib/docker/overlay2/9f5e778ea554e91a930e169d54cc3039a0f410153e0eb7fd2e44371431c5239c/diff:/var/lib/docker/overlay2/21fd88361fee5b30bab54c1a2fb3661a9258260808d03a0aa5e76d695c13e9fa/diff:/var/lib/docker/overlay2/d1a70ff42b514a48ede228bfd667a1ff44276a97ca8f8972c361fbe666dbf5af/diff:/var/lib/docker/overlay2/0b3e33b93dd83274708c0ed2f844269da0eaf9b93ced47324281f889f623961f/diff:/var/lib/docker/overlay2/41ba4ebf100466946a1c040dfafdebcd1a2c3735e7fae36f117a310a88d53f27/diff:/var/lib/docker/overlay2/61da3a41b7f242cdcb824df3019a74f4cce296b58f5eb98a12aafe0f881b0b28/diff:/var/lib/docker/overlay2/1bf8b92719375a9d8f097f598013684a7349d25f3ec4b2f39c33a05d4ac38e63/diff:/var/lib/docker/overlay2/6e25221474c86778a56dad511c236c16b7f32f46f432667d5734c1c823a29c04/diff:/var/lib/docker/overlay2/516ea8
fc57230e6987a437167604d02d4c86c90cc43e30c725ebb58b328c5b28/diff:/var/lib/docker/overlay2/773735ff5815c46111f85a6a2ed29eaba38131060daeaf31fcc6d190d54c8ad0/diff:/var/lib/docker/overlay2/54f6eaef84eb22a9bd4375e213ff3f1af4d87174a0636cd705161eb9f592e76a/diff:/var/lib/docker/overlay2/c5903c40eadd84761d888193a77e1732b778ef4a0f7c591242ddd1452659e9c5/diff:/var/lib/docker/overlay2/efe55213e0610967c4943095e3d2ddc820e6be3e9832f18c669f704ba5bfb804/diff:/var/lib/docker/overlay2/dd9ef0a255fcef6df1825ec2d2f78249bdd4d29ff9b169e2bac4ec68e17ea0b5/diff:/var/lib/docker/overlay2/a88591bbe843d595c945e5ddc61dc438e66750a9f27de8cecb25a581f644f63d/diff:/var/lib/docker/overlay2/5b7a9b283ffcce0a068b6d113f8160ebffa0023496e720c09b2230405cd98660/diff:/var/lib/docker/overlay2/ba1cd99628fbd2ee5537eb57211209b402707fd4927ab6f487db64a080b2bb40/diff:/var/lib/docker/overlay2/77e297c6446310bb550315eda2e71d0ed3596dcf59cf5f929ed16415a6e839e7/diff:/var/lib/docker/overlay2/b72a642a10b9b221f8dab95965c8d7ebf61439db1817d2a7e55e3351fb3bfa79/diff:/var/lib/d
ocker/overlay2/2c85849636b2636c39c1165674634052c165bf1671737807f9f84af9cdaec710/diff:/var/lib/docker/overlay2/d481e2df4e2fbb51c3c6548fe0e2d75c3bbc6867daeaeac559fea32b0969109d/diff:/var/lib/docker/overlay2/a4ba08d7c7be1aee5f1f8ab163c91e56cc270b23926e8e8f2d6d7baee1c4cd79/diff:/var/lib/docker/overlay2/1fc8aefb80213c58eee3e457fad1ed5e0860e5c7101a8c94babf2676372d8d40/diff:/var/lib/docker/overlay2/8156590a8e10d518427298740db8a2645d4864ce4cdab44568080a1bbec209ae/diff:/var/lib/docker/overlay2/de8e7a927a81ab8b0dca0aa9ad11fb89bc2e11a56bb179b2a2a9a16246ab957d/diff:/var/lib/docker/overlay2/b1a2174e26ac2948f2a988c58c45115f230d1168b148e07573537d88cd485d27/diff:/var/lib/docker/overlay2/99eb504e3cdd219c35b20f48bd3980b389a181a64d2061645d77daee9a632a1f/diff:/var/lib/docker/overlay2/f00c0c9d98f4688c7caa116c3bef509c2aeb87bc2be717c3b4dd213a9aa6e931/diff:/var/lib/docker/overlay2/3ccdd6f5db6e7677b32d1118b2389939576cec9399a2074953bde1f44d0ffc8a/diff:/var/lib/docker/overlay2/4c71c056a816d63d030c0fff4784f0102ebcef0ab5a658ffcbe0712ec24
a9613/diff:/var/lib/docker/overlay2/3f9f8c3d456e713700ebe7d9ce7bd0ccade1486538efc09ba938942358692d6b/diff:/var/lib/docker/overlay2/6493814c93da91c97a90a193105168493b20183da8ab0a899ea52d4e893b2c49/diff:/var/lib/docker/overlay2/ad9631f623b7b3422f0937ca422d90ee0fdec23f7e5f098ec6b4997b7f779fca/diff:/var/lib/docker/overlay2/c8c5afb62a7fd536950c0205b19e9ff902be1d0392649f2bd1fcd0c8c4bf964c/diff:/var/lib/docker/overlay2/50d49e0f668e585ab4a5eebae984f585c76a14adba7817457c17a6154185262b/diff:/var/lib/docker/overlay2/5d37263f7458b15a195a8fefcae668e9bb7464e180a3c490081f228be8dbc2e6/diff:/var/lib/docker/overlay2/e82d2914dc1ce857d9e4246cfe1f5fa67768dedcf273e555191da326b0b83966/diff:/var/lib/docker/overlay2/4b3559760284dc821c75387fbf41238bdcfa44c7949d784247228e1d190e8547/diff:/var/lib/docker/overlay2/3fd6c3231524b82c531a887996ca0c4ffd24fa733444aab8fbdbf802e09e49c3/diff:/var/lib/docker/overlay2/f79c36358a76fa00014ba7ec5a0c44b160ae24ed2130967de29343cc513cb2d0/diff:/var/lib/docker/overlay2/0628686e980f429d66d25561d57e7c1cbe5405
52c70cef7d15955c6c1ad1a369/diff",
	                "MergedDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/12a758cbacc32286cfb697e84377659c014b379508fef3d7d03c0d5247c9343f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-919000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-919000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-919000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-919000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7f0571f7cf360e3b17992c95713f7ea16dfa34d74d6177b2bc9da7d70e05cc8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62350"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62351"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62352"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62353"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62354"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e7f0571f7cf3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-919000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5b30451d4570",
	                        "old-k8s-version-919000"
	                    ],
	                    "NetworkID": "c7154bbdfe1ae896999b2fd2c462dec29ff61281e64aa32aac9e788f781af78c",
	                    "EndpointID": "39c19201631fccca791376922476d56d0e5ed13cd34e99506a6626afdd2a5781",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
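The inspect output above is the container's full record; when only a few fields matter for triage (run state, start/finish times, the forwarded API server port), docker inspect's Go-template formatting can pull them directly. A small sketch using the container name from this run (queries are illustrative):

    # State and timing of the kic container backing the profile.
    docker inspect -f '{{.State.Status}} started={{.State.StartedAt}} finished={{.State.FinishedAt}}' old-k8s-version-919000

    # Host port forwarded to the API server port 8443/tcp inside the container.
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-919000

    # Base image the container was created from.
    docker inspect -f '{{.Config.Image}}' old-k8s-version-919000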
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 2 (391.587222ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-919000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-919000 logs -n 25: (3.38003407s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                           |                              |         |         |                     |                     |
	| pause   | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-938000 | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | default-k8s-diff-port-938000                         |                              |         |         |                     |                     |
	| start   | -p newest-cni-835000 --memory=2200 --alsologtostderr | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-835000           | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p newest-cni-835000                                 | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-835000                | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:11 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p newest-cni-835000 --memory=2200 --alsologtostderr | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:11 PST | 23 Feb 23 15:12 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-835000 sudo                            | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:12 PST | 23 Feb 23 15:12 PST |
	|         | crictl images -o json                                |                              |         |         |                     |                     |
	| pause   | -p newest-cni-835000                                 | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:12 PST | 23 Feb 23 15:12 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p newest-cni-835000                                 | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:12 PST | 23 Feb 23 15:12 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-835000                                 | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:12 PST | 23 Feb 23 15:12 PST |
	| delete  | -p newest-cni-835000                                 | newest-cni-835000            | jenkins | v1.29.0 | 23 Feb 23 15:12 PST | 23 Feb 23 15:12 PST |
	| start   | -p embed-certs-057000                                | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:12 PST | 23 Feb 23 15:13 PST |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-057000          | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:13 PST | 23 Feb 23 15:13 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p embed-certs-057000                                | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:13 PST | 23 Feb 23 15:13 PST |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-057000               | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:13 PST | 23 Feb 23 15:13 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p embed-certs-057000                                | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:13 PST | 23 Feb 23 15:22 PST |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-057000 sudo                           | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:23 PST | 23 Feb 23 15:23 PST |
	|         | crictl images -o json                                |                              |         |         |                     |                     |
	| pause   | -p embed-certs-057000                                | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:23 PST | 23 Feb 23 15:23 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p embed-certs-057000                                | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:23 PST | 23 Feb 23 15:23 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-057000                                | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:23 PST | 23 Feb 23 15:23 PST |
	| delete  | -p embed-certs-057000                                | embed-certs-057000           | jenkins | v1.29.0 | 23 Feb 23 15:23 PST | 23 Feb 23 15:23 PST |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 15:13:39
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 15:13:39.521864   35807 out.go:296] Setting OutFile to fd 1 ...
	I0223 15:13:39.522056   35807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 15:13:39.522061   35807 out.go:309] Setting ErrFile to fd 2...
	I0223 15:13:39.522064   35807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 15:13:39.522175   35807 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 15:13:39.523542   35807 out.go:303] Setting JSON to false
	I0223 15:13:39.541759   35807 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9793,"bootTime":1677184226,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 15:13:39.541826   35807 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 15:13:39.563671   35807 out.go:177] * [embed-certs-057000] minikube v1.29.0 on Darwin 13.2
	I0223 15:13:39.606032   35807 notify.go:220] Checking for updates...
	I0223 15:13:39.606075   35807 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 15:13:39.627901   35807 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 15:13:39.649015   35807 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 15:13:39.671032   35807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 15:13:39.692781   35807 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	I0223 15:13:39.713961   35807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 15:13:39.734945   35807 config.go:182] Loaded profile config "embed-certs-057000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 15:13:39.735304   35807 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 15:13:39.800157   35807 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 15:13:39.800282   35807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 15:13:39.941963   35807 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 23:13:39.84988923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 15:13:39.963796   35807 out.go:177] * Using the docker driver based on existing profile
	I0223 15:13:39.985441   35807 start.go:296] selected driver: docker
	I0223 15:13:39.985473   35807 start.go:857] validating driver "docker" against &{Name:embed-certs-057000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-057000 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 15:13:39.985633   35807 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 15:13:39.989470   35807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 15:13:40.130163   35807 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 23:13:40.03871469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
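For readers following the trace, here is a minimal Go sketch of the pattern behind the "docker system info --format {{json .}}" call logged above: shell out to the Docker CLI and decode only the fields of interest. The struct is illustrative, not minikube's own type, and the JSON field names (ServerVersion, OperatingSystem, NCPU, MemTotal) are assumed to match what this Docker CLI version emits.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// dockerInfo picks out a few fields from `docker system info --format "{{json .}}"`.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker system info: %v", err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("%s on %s: %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
}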
	I0223 15:13:40.130334   35807 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 15:13:40.130355   35807 cni.go:84] Creating CNI manager for ""
	I0223 15:13:40.130371   35807 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 15:13:40.130381   35807 start_flags.go:319] config:
	{Name:embed-certs-057000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-057000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 15:13:40.173696   35807 out.go:177] * Starting control plane node embed-certs-057000 in cluster embed-certs-057000
	I0223 15:13:40.194929   35807 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 15:13:40.216911   35807 out.go:177] * Pulling base image ...
	I0223 15:13:40.258830   35807 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 15:13:40.258920   35807 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 15:13:40.258949   35807 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 15:13:40.258969   35807 cache.go:57] Caching tarball of preloaded images
	I0223 15:13:40.259179   35807 preload.go:174] Found /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 15:13:40.259200   35807 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 15:13:40.260096   35807 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/config.json ...
	I0223 15:13:40.315639   35807 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 15:13:40.315658   35807 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 15:13:40.315682   35807 cache.go:193] Successfully downloaded all kic artifacts
	I0223 15:13:40.315735   35807 start.go:364] acquiring machines lock for embed-certs-057000: {Name:mk154721afc5beb409bbb73851ee94a0bbebb00c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 15:13:40.315822   35807 start.go:368] acquired machines lock for "embed-certs-057000" in 65.654µs
	I0223 15:13:40.315850   35807 start.go:96] Skipping create...Using existing machine configuration
	I0223 15:13:40.315859   35807 fix.go:55] fixHost starting: 
	I0223 15:13:40.316102   35807 cli_runner.go:164] Run: docker container inspect embed-certs-057000 --format={{.State.Status}}
	I0223 15:13:40.373893   35807 fix.go:103] recreateIfNeeded on embed-certs-057000: state=Stopped err=<nil>
	W0223 15:13:40.373940   35807 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 15:13:40.395737   35807 out.go:177] * Restarting existing docker container for "embed-certs-057000" ...
	I0223 15:13:40.416604   35807 cli_runner.go:164] Run: docker start embed-certs-057000
	I0223 15:13:40.740649   35807 cli_runner.go:164] Run: docker container inspect embed-certs-057000 --format={{.State.Status}}
	I0223 15:13:40.800162   35807 kic.go:426] container "embed-certs-057000" state is running.
	I0223 15:13:40.800750   35807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-057000
	I0223 15:13:40.860901   35807 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/config.json ...
	I0223 15:13:40.861336   35807 machine.go:88] provisioning docker machine ...
	I0223 15:13:40.861371   35807 ubuntu.go:169] provisioning hostname "embed-certs-057000"
	I0223 15:13:40.861452   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:40.922710   35807 main.go:141] libmachine: Using SSH client type: native
	I0223 15:13:40.923118   35807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63186 <nil> <nil>}
	I0223 15:13:40.923132   35807 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-057000 && echo "embed-certs-057000" | sudo tee /etc/hostname
	I0223 15:13:41.075688   35807 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-057000
	
	I0223 15:13:41.075774   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:41.133211   35807 main.go:141] libmachine: Using SSH client type: native
	I0223 15:13:41.133568   35807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63186 <nil> <nil>}
	I0223 15:13:41.133581   35807 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-057000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-057000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-057000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 15:13:41.265140   35807 main.go:141] libmachine: SSH cmd err, output: <nil>: 
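The "About to run SSH command" / "SSH cmd err, output" pairs above come from libmachine driving the container over a port-forwarded SSH connection (127.0.0.1:63186 in this run). Below is a minimal sketch of that pattern using golang.org/x/crypto/ssh; it is illustrative only and not minikube's implementation, though the user, port, key path, and hostname command are taken from this log.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, port and command are the ones visible in the log above.
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:63186", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname embed-certs-057000 && echo "embed-certs-057000" | sudo tee /etc/hostname`)
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}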
	I0223 15:13:41.265163   35807 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-14738/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-14738/.minikube}
	I0223 15:13:41.265198   35807 ubuntu.go:177] setting up certificates
	I0223 15:13:41.265207   35807 provision.go:83] configureAuth start
	I0223 15:13:41.265309   35807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-057000
	I0223 15:13:41.323204   35807 provision.go:138] copyHostCerts
	I0223 15:13:41.323315   35807 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem, removing ...
	I0223 15:13:41.323329   35807 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem
	I0223 15:13:41.323434   35807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.pem (1082 bytes)
	I0223 15:13:41.323640   35807 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem, removing ...
	I0223 15:13:41.323648   35807 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem
	I0223 15:13:41.323708   35807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/cert.pem (1123 bytes)
	I0223 15:13:41.323856   35807 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem, removing ...
	I0223 15:13:41.323861   35807 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem
	I0223 15:13:41.323922   35807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-14738/.minikube/key.pem (1675 bytes)
	I0223 15:13:41.324047   35807 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem org=jenkins.embed-certs-057000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-057000]
	I0223 15:13:41.380473   35807 provision.go:172] copyRemoteCerts
	I0223 15:13:41.380523   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 15:13:41.380576   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:41.437706   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:13:41.532086   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 15:13:41.549234   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0223 15:13:41.566038   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 15:13:41.582933   35807 provision.go:86] duration metric: configureAuth took 317.702759ms
	I0223 15:13:41.582947   35807 ubuntu.go:193] setting minikube options for container-runtime
	I0223 15:13:41.583114   35807 config.go:182] Loaded profile config "embed-certs-057000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 15:13:41.583177   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:41.639766   35807 main.go:141] libmachine: Using SSH client type: native
	I0223 15:13:41.640140   35807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63186 <nil> <nil>}
	I0223 15:13:41.640152   35807 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 15:13:41.773205   35807 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 15:13:41.773219   35807 ubuntu.go:71] root file system type: overlay
	I0223 15:13:41.773303   35807 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 15:13:41.773390   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:41.830400   35807 main.go:141] libmachine: Using SSH client type: native
	I0223 15:13:41.830746   35807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63186 <nil> <nil>}
	I0223 15:13:41.830800   35807 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 15:13:41.972798   35807 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 15:13:41.972894   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:42.031236   35807 main.go:141] libmachine: Using SSH client type: native
	I0223 15:13:42.031605   35807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63186 <nil> <nil>}
	I0223 15:13:42.031618   35807 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 15:13:42.169063   35807 main.go:141] libmachine: SSH cmd err, output: <nil>: 
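The two SSH commands above implement an idempotent unit-file update: write docker.service.new, diff it against the live unit, and only swap the file and restart Docker when they differ. A rough local Go equivalent of that swap-and-restart step is sketched below; it assumes root on a systemd host, and the runCmd helper is hypothetical (the real flow runs these commands over SSH inside the node).

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

// runCmd is a hypothetical helper that runs a command and fails loudly.
func runCmd(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	const unit = "/lib/systemd/system/docker.service"
	oldUnit, _ := os.ReadFile(unit)            // a missing unit simply reads as empty
	newUnit, err := os.ReadFile(unit + ".new") // written by the previous tee step
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(oldUnit, newUnit) {
		return // unit unchanged: leave the running docker daemon alone
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		log.Fatal(err)
	}
	runCmd("systemctl", "daemon-reload")
	runCmd("systemctl", "enable", "docker")
	runCmd("systemctl", "restart", "docker")
}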
	I0223 15:13:42.169081   35807 machine.go:91] provisioned docker machine in 1.307702966s
	I0223 15:13:42.169091   35807 start.go:300] post-start starting for "embed-certs-057000" (driver="docker")
	I0223 15:13:42.169097   35807 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 15:13:42.169175   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 15:13:42.169243   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:42.225899   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:13:42.322323   35807 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 15:13:42.325964   35807 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 15:13:42.325983   35807 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 15:13:42.325995   35807 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 15:13:42.325999   35807 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 15:13:42.326006   35807 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/addons for local assets ...
	I0223 15:13:42.326090   35807 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-14738/.minikube/files for local assets ...
	I0223 15:13:42.326254   35807 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem -> 152102.pem in /etc/ssl/certs
	I0223 15:13:42.326457   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 15:13:42.333934   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /etc/ssl/certs/152102.pem (1708 bytes)
	I0223 15:13:42.350781   35807 start.go:303] post-start completed in 181.670372ms
	I0223 15:13:42.350875   35807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 15:13:42.350939   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:42.407493   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:13:42.500330   35807 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 15:13:42.504761   35807 fix.go:57] fixHost completed within 2.188843272s
	I0223 15:13:42.504778   35807 start.go:83] releasing machines lock for "embed-certs-057000", held for 2.188894254s
	I0223 15:13:42.504872   35807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-057000
	I0223 15:13:42.561647   35807 ssh_runner.go:195] Run: cat /version.json
	I0223 15:13:42.561684   35807 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 15:13:42.561725   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:42.561746   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:42.622303   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:13:42.622320   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:13:42.763888   35807 ssh_runner.go:195] Run: systemctl --version
	I0223 15:13:42.768523   35807 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 15:13:42.774047   35807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 15:13:42.789617   35807 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
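The find/sed one-liner above patches any loopback CNI config so that it carries a "name" field and cniVersion 1.0.0. The same edit can be expressed as a small JSON rewrite; the sketch below uses a hypothetical file name under /etc/cni/net.d and is not the code minikube runs.

package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	// Hypothetical loopback config path; the log matches *loopback.conf* under /etc/cni/net.d.
	const path = "/etc/cni/net.d/200-loopback.conf"
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		log.Fatal(err)
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // newer CNI specs require every config to carry a name
	}
	conf["cniVersion"] = "1.0.0"
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}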
	I0223 15:13:42.789690   35807 ssh_runner.go:195] Run: which cri-dockerd
	I0223 15:13:42.793842   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 15:13:42.801557   35807 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 15:13:42.815158   35807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 15:13:42.823426   35807 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0223 15:13:42.823444   35807 start.go:485] detecting cgroup driver to use...
	I0223 15:13:42.823456   35807 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 15:13:42.823539   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 15:13:42.836519   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 15:13:42.845081   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 15:13:42.853668   35807 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 15:13:42.853726   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 15:13:42.862123   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 15:13:42.870597   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 15:13:42.879154   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 15:13:42.887556   35807 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 15:13:42.895481   35807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 15:13:42.904028   35807 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 15:13:42.911148   35807 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 15:13:42.918202   35807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 15:13:42.984522   35807 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 15:13:43.053034   35807 start.go:485] detecting cgroup driver to use...
	I0223 15:13:43.053054   35807 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 15:13:43.053124   35807 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 15:13:43.064423   35807 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 15:13:43.064494   35807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 15:13:43.074473   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 15:13:43.088934   35807 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 15:13:43.179420   35807 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 15:13:43.277388   35807 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 15:13:43.277409   35807 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 15:13:43.291053   35807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 15:13:43.382527   35807 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 15:13:43.654504   35807 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 15:13:43.725020   35807 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 15:13:43.794135   35807 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 15:13:43.864177   35807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 15:13:43.934934   35807 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 15:13:43.946639   35807 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 15:13:43.946722   35807 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 15:13:43.950745   35807 start.go:553] Will wait 60s for crictl version
	I0223 15:13:43.950796   35807 ssh_runner.go:195] Run: which crictl
	I0223 15:13:43.954525   35807 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 15:13:44.050655   35807 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 15:13:44.050733   35807 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 15:13:44.076000   35807 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 15:13:44.143442   35807 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 15:13:44.143686   35807 cli_runner.go:164] Run: docker exec -t embed-certs-057000 dig +short host.docker.internal
	I0223 15:13:44.251235   35807 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 15:13:44.251345   35807 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 15:13:44.255797   35807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 15:13:44.266303   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:44.325363   35807 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 15:13:44.325445   35807 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 15:13:44.346211   35807 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0223 15:13:44.346228   35807 docker.go:560] Images already preloaded, skipping extraction
	I0223 15:13:44.346326   35807 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 15:13:44.366675   35807 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0223 15:13:44.366700   35807 cache_images.go:84] Images are preloaded, skipping loading
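The preload check above simply compares the images already present in the Docker daemon (via `docker images --format {{.Repository}}:{{.Tag}}`) against the set this Kubernetes version needs, and skips extraction when nothing is missing. A condensed sketch of that comparison, using a hand-picked subset of the expected images from this run, could look like:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// A subset of the images listed in the preload output above.
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.26.1",
		"registry.k8s.io/etcd:3.5.6-0",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	missing := false
	for _, want := range expected {
		if !have[want] {
			missing = true
			fmt.Println("not preloaded, would need to load:", want)
		}
	}
	if !missing {
		fmt.Println("Images are preloaded, skipping loading")
	}
}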
	I0223 15:13:44.366773   35807 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 15:13:44.392743   35807 cni.go:84] Creating CNI manager for ""
	I0223 15:13:44.392760   35807 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 15:13:44.392777   35807 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 15:13:44.392795   35807 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-057000 NodeName:embed-certs-057000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 15:13:44.392915   35807 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-057000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 15:13:44.392985   35807 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-057000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:embed-certs-057000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 15:13:44.393049   35807 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 15:13:44.401120   35807 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 15:13:44.401178   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 15:13:44.408792   35807 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0223 15:13:44.421382   35807 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 15:13:44.434287   35807 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0223 15:13:44.447169   35807 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0223 15:13:44.450873   35807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 15:13:44.460553   35807 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000 for IP: 192.168.76.2
	I0223 15:13:44.460571   35807 certs.go:186] acquiring lock for shared ca certs: {Name:mkd042e3451e4b14920a2306f1ed09ac35ec1a58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 15:13:44.460739   35807 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key
	I0223 15:13:44.460789   35807 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key
	I0223 15:13:44.460873   35807 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/client.key
	I0223 15:13:44.460966   35807 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/apiserver.key.31bdca25
	I0223 15:13:44.461026   35807 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/proxy-client.key
	I0223 15:13:44.461220   35807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem (1338 bytes)
	W0223 15:13:44.461264   35807 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210_empty.pem, impossibly tiny 0 bytes
	I0223 15:13:44.461277   35807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 15:13:44.461310   35807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/ca.pem (1082 bytes)
	I0223 15:13:44.461344   35807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/cert.pem (1123 bytes)
	I0223 15:13:44.461374   35807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/certs/key.pem (1675 bytes)
	I0223 15:13:44.461473   35807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem (1708 bytes)
	I0223 15:13:44.462053   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 15:13:44.479155   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 15:13:44.495969   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 15:13:44.512770   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/embed-certs-057000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 15:13:44.529716   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 15:13:44.562646   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 15:13:44.580215   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 15:13:44.597613   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0223 15:13:44.615562   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/ssl/certs/152102.pem --> /usr/share/ca-certificates/152102.pem (1708 bytes)
	I0223 15:13:44.632918   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 15:13:44.649852   35807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-14738/.minikube/certs/15210.pem --> /usr/share/ca-certificates/15210.pem (1338 bytes)
	I0223 15:13:44.666603   35807 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 15:13:44.679500   35807 ssh_runner.go:195] Run: openssl version
	I0223 15:13:44.684873   35807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15210.pem && ln -fs /usr/share/ca-certificates/15210.pem /etc/ssl/certs/15210.pem"
	I0223 15:13:44.693236   35807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15210.pem
	I0223 15:13:44.697504   35807 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 22:04 /usr/share/ca-certificates/15210.pem
	I0223 15:13:44.697553   35807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15210.pem
	I0223 15:13:44.702894   35807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15210.pem /etc/ssl/certs/51391683.0"
	I0223 15:13:44.710098   35807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/152102.pem && ln -fs /usr/share/ca-certificates/152102.pem /etc/ssl/certs/152102.pem"
	I0223 15:13:44.718166   35807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/152102.pem
	I0223 15:13:44.722106   35807 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 22:04 /usr/share/ca-certificates/152102.pem
	I0223 15:13:44.722157   35807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/152102.pem
	I0223 15:13:44.727500   35807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/152102.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 15:13:44.734850   35807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 15:13:44.742825   35807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 15:13:44.747023   35807 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 21:59 /usr/share/ca-certificates/minikubeCA.pem
	I0223 15:13:44.747066   35807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 15:13:44.752528   35807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
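The openssl/ln pairs above are how the host CA certificates get registered system-wide: hash the certificate subject with `openssl x509 -hash -noout` and symlink the PEM as /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA in this run). A stand-alone sketch of that step, assuming root and the same file locations as the log:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941 for minikubeCA here.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mimic `ln -fs`: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", pem, "->", link)
}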
	I0223 15:13:44.760122   35807 kubeadm.go:401] StartCluster: {Name:embed-certs-057000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-057000 Namespace:default APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 15:13:44.760231   35807 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 15:13:44.779475   35807 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 15:13:44.787272   35807 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 15:13:44.787289   35807 kubeadm.go:633] restartCluster start
	I0223 15:13:44.787357   35807 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 15:13:44.794751   35807 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:44.794816   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:13:44.853386   35807 kubeconfig.go:135] verify returned: extract IP: "embed-certs-057000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 15:13:44.853552   35807 kubeconfig.go:146] "embed-certs-057000" context is missing from /Users/jenkins/minikube-integration/15909-14738/kubeconfig - will repair!
	I0223 15:13:44.853872   35807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/kubeconfig: {Name:mk366c13f6069774a57c4d74123d5172c8522a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 15:13:44.855464   35807 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 15:13:44.863362   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:44.863416   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:44.872012   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:45.372081   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:45.372179   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:45.381200   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:45.874211   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:45.874364   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:45.885273   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:46.372301   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:46.372434   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:46.383239   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:46.873526   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:46.873740   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:46.884625   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:47.374228   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:47.374478   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:47.385529   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:47.872290   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:47.872512   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:47.883239   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:48.374159   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:48.374225   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:48.383777   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:48.873778   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:48.873985   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:48.884914   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:49.373053   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:49.373274   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:49.384268   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:49.873342   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:49.873532   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:49.884367   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:50.372825   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:50.372979   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:50.384366   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:50.873933   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:50.874064   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:50.885153   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:51.372389   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:51.372504   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:51.383739   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:51.872944   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:51.873079   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:51.884354   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:52.372366   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:52.372556   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:52.382910   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:52.872426   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:52.872633   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:52.883277   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:53.372820   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:53.372971   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:53.383912   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:53.872627   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:53.872704   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:53.882041   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:54.374219   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:54.374426   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:54.385855   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:54.874343   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:54.874511   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:54.885413   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:54.885425   35807 api_server.go:165] Checking apiserver status ...
	I0223 15:13:54.885475   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 15:13:54.893716   35807 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:54.893729   35807 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
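For reference, the retry loop logged above boils down to the following standalone Go sketch: poll `sudo pgrep -xnf kube-apiserver.*minikube.*` until a PID appears or a deadline passes. This is not minikube's api_server.go; it runs pgrep locally instead of through the cluster's ssh_runner, and the timeout and interval are illustrative.

	// waitapiserver_sketch.go: minimal illustration of the "Checking apiserver status ..." loop.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// waitForAPIServerPID re-runs pgrep until it prints a PID or the timeout elapses.
	func waitForAPIServerPID(timeout, interval time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil && len(strings.TrimSpace(string(out))) > 0 {
				return strings.TrimSpace(string(out)), nil
			}
			// pgrep exits 1 when nothing matches; treat that as "not up yet" and retry.
			time.Sleep(interval)
		}
		return "", fmt.Errorf("timed out waiting for the condition")
	}
	
	func main() {
		pid, err := waitForAPIServerPID(2*time.Minute, 500*time.Millisecond)
		if err != nil {
			fmt.Println("apiserver error:", err) // the log above takes this branch and reconfigures
			return
		}
		fmt.Println("apiserver pid:", pid)
	}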
	I0223 15:13:54.893738   35807 kubeadm.go:1120] stopping kube-system containers ...
	I0223 15:13:54.893807   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 15:13:54.915044   35807 docker.go:456] Stopping containers: [cc50a060642a 558b8f0ad13f da7894d8d7c1 792e7a1537cc 68d629fd6e42 bb7bcaace72a ecdbf13fbbf3 48a3667de181 f48fe9277e7d 01e2ca7abe06 3db9cab81bb2 ea597afde2a9 a8175c789c55 bcc2a5478340 a4ceab48a41b dff58633fe1c]
	I0223 15:13:54.915135   35807 ssh_runner.go:195] Run: docker stop cc50a060642a 558b8f0ad13f da7894d8d7c1 792e7a1537cc 68d629fd6e42 bb7bcaace72a ecdbf13fbbf3 48a3667de181 f48fe9277e7d 01e2ca7abe06 3db9cab81bb2 ea597afde2a9 a8175c789c55 bcc2a5478340 a4ceab48a41b dff58633fe1c
	I0223 15:13:54.935104   35807 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 15:13:54.945801   35807 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 15:13:54.953649   35807 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 23 23:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 23 23:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Feb 23 23:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 23 23:12 /etc/kubernetes/scheduler.conf
	
	I0223 15:13:54.953709   35807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0223 15:13:54.961095   35807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0223 15:13:54.968479   35807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0223 15:13:54.975669   35807 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:54.975720   35807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0223 15:13:54.982774   35807 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0223 15:13:54.990091   35807 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0223 15:13:54.990142   35807 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0223 15:13:54.997145   35807 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 15:13:55.004722   35807 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 15:13:55.004737   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:13:55.058020   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:13:55.652634   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:13:55.782130   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:13:55.842645   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
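The five "kubeadm init phase" commands above are run back to back against the same kubeadm.yaml. A simplified local sketch of that sequence is below; the real invocations go through minikube's ssh_runner with a PATH override to the bundled v1.26.1 binaries, whereas this uses whatever kubeadm is on PATH.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Phase list copied from the log above, in the same order.
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{"init", "phase"}, p...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			out, err := exec.Command("kubeadm", args...).CombinedOutput()
			fmt.Printf("kubeadm %v\n%s\n", args, out)
			if err != nil {
				fmt.Println("phase failed:", err)
				return
			}
		}
	}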
	I0223 15:13:55.949987   35807 api_server.go:51] waiting for apiserver process to appear ...
	I0223 15:13:55.950059   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:13:56.459947   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:13:56.960289   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:13:56.975118   35807 api_server.go:71] duration metric: took 1.025108372s to wait for apiserver process to appear ...
	I0223 15:13:56.975140   35807 api_server.go:87] waiting for apiserver healthz status ...
	I0223 15:13:56.975158   35807 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63190/healthz ...
	I0223 15:13:56.976348   35807 api_server.go:268] stopped: https://127.0.0.1:63190/healthz: Get "https://127.0.0.1:63190/healthz": EOF
	I0223 15:13:57.476504   35807 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63190/healthz ...
	I0223 15:13:59.089344   35807 api_server.go:278] https://127.0.0.1:63190/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0223 15:13:59.089360   35807 api_server.go:102] status: https://127.0.0.1:63190/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 15:13:59.476815   35807 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63190/healthz ...
	I0223 15:13:59.483428   35807 api_server.go:278] https://127.0.0.1:63190/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 15:13:59.483443   35807 api_server.go:102] status: https://127.0.0.1:63190/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
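The healthz sequence above (EOF while the process starts, an anonymous 403, a 500 while rbac/bootstrap-roles and the priority-class hooks finish, then a plain 200 "ok") is the normal progression of a restarting apiserver. A standalone Go sketch of that poll is below, hitting the same locally forwarded port 63190; TLS verification is skipped for the example and all statuses other than 200 are treated as "retry".

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// healthzReady returns true only for a 200 response with body "ok".
	func healthzReady(url string) bool {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false // connection refused / EOF while the apiserver is still coming up
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok"
	}
	
	func main() {
		url := "https://127.0.0.1:63190/healthz" // port taken from the log above
		for i := 0; i < 60; i++ {
			if healthzReady(url) {
				fmt.Println("apiserver healthy")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("healthz never returned 200")
	}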
	I0223 15:13:59.976950   35807 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63190/healthz ...
	I0223 15:13:59.982272   35807 api_server.go:278] https://127.0.0.1:63190/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 15:13:59.982285   35807 api_server.go:102] status: https://127.0.0.1:63190/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 15:14:00.476589   35807 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63190/healthz ...
	I0223 15:14:00.483207   35807 api_server.go:278] https://127.0.0.1:63190/healthz returned 200:
	ok
	I0223 15:14:00.489926   35807 api_server.go:140] control plane version: v1.26.1
	I0223 15:14:00.489937   35807 api_server.go:130] duration metric: took 3.514703433s to wait for apiserver health ...
	I0223 15:14:00.489942   35807 cni.go:84] Creating CNI manager for ""
	I0223 15:14:00.489954   35807 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 15:14:00.511439   35807 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0223 15:14:00.532402   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0223 15:14:00.542474   35807 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
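The "Configuring bridge CNI" step copies a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist; the log does not print its contents. The sketch below writes a generic bridge+portmap chain of the kind such a step installs, to a harmless temporary path. The JSON shown is illustrative only, not minikube's actual template, and the subnet is an assumption.

	package main
	
	import (
		"fmt"
		"os"
	)
	
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	      "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`
	
	func main() {
		path := "/tmp/1-k8s.conflist" // written locally for the example, not to /etc/cni/net.d
		if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
			fmt.Println("write failed:", err)
			return
		}
		fmt.Println("wrote", len(conflist), "bytes to", path)
	}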
	I0223 15:14:00.555557   35807 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 15:14:00.562386   35807 system_pods.go:59] 8 kube-system pods found
	I0223 15:14:00.562400   35807 system_pods.go:61] "coredns-787d4945fb-wcn9r" [9f4f5578-3ac6-440d-97eb-89d1b11f8a47] Running
	I0223 15:14:00.562404   35807 system_pods.go:61] "etcd-embed-certs-057000" [ddd642c0-a140-41a7-bbbd-87060ab43042] Running
	I0223 15:14:00.562408   35807 system_pods.go:61] "kube-apiserver-embed-certs-057000" [7c9ddf95-c988-4085-986a-054e9baa87cb] Running
	I0223 15:14:00.562415   35807 system_pods.go:61] "kube-controller-manager-embed-certs-057000" [600dbddf-4b1a-4049-9247-1ba49f5680cb] Running
	I0223 15:14:00.562420   35807 system_pods.go:61] "kube-proxy-mqfs7" [f5163c21-0a3f-45c3-b8a6-bcee2d37da73] Running
	I0223 15:14:00.562423   35807 system_pods.go:61] "kube-scheduler-embed-certs-057000" [dcfe8b51-0610-4947-b4db-04d6e156fd5a] Running
	I0223 15:14:00.562429   35807 system_pods.go:61] "metrics-server-7997d45854-2dqv2" [984b03f5-27f0-4b44-b72b-344cc5fc2005] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 15:14:00.562433   35807 system_pods.go:61] "storage-provisioner" [905cca6a-8691-4cdf-9640-6f46de153555] Running
	I0223 15:14:00.562437   35807 system_pods.go:74] duration metric: took 6.87081ms to wait for pod list to return data ...
	I0223 15:14:00.562443   35807 node_conditions.go:102] verifying NodePressure condition ...
	I0223 15:14:00.565656   35807 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0223 15:14:00.565670   35807 node_conditions.go:123] node cpu capacity is 6
	I0223 15:14:00.565678   35807 node_conditions.go:105] duration metric: took 3.231415ms to run NodePressure ...
	I0223 15:14:00.565691   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 15:14:00.693500   35807 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0223 15:14:00.698255   35807 retry.go:31] will retry after 263.262845ms: kubelet not initialised
	I0223 15:14:00.966655   35807 retry.go:31] will retry after 257.806036ms: kubelet not initialised
	I0223 15:14:01.231562   35807 retry.go:31] will retry after 421.334816ms: kubelet not initialised
	I0223 15:14:01.658089   35807 retry.go:31] will retry after 928.576713ms: kubelet not initialised
	I0223 15:14:02.592517   35807 retry.go:31] will retry after 719.215583ms: kubelet not initialised
	I0223 15:14:03.318322   35807 kubeadm.go:784] kubelet initialised
	I0223 15:14:03.318334   35807 kubeadm.go:785] duration metric: took 2.624753023s waiting for restarted kubelet to initialise ...
	I0223 15:14:03.318342   35807 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 15:14:03.322734   35807 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-wcn9r" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:03.327881   35807 pod_ready.go:92] pod "coredns-787d4945fb-wcn9r" in "kube-system" namespace has status "Ready":"True"
	I0223 15:14:03.327890   35807 pod_ready.go:81] duration metric: took 5.143832ms waiting for pod "coredns-787d4945fb-wcn9r" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:03.327895   35807 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:03.332798   35807 pod_ready.go:92] pod "etcd-embed-certs-057000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:14:03.332806   35807 pod_ready.go:81] duration metric: took 4.905622ms waiting for pod "etcd-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:03.332811   35807 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:03.337292   35807 pod_ready.go:92] pod "kube-apiserver-embed-certs-057000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:14:03.337300   35807 pod_ready.go:81] duration metric: took 4.484385ms waiting for pod "kube-apiserver-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:03.337308   35807 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:05.351074   35807 pod_ready.go:102] pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:07.851683   35807 pod_ready.go:102] pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:09.848057   35807 pod_ready.go:92] pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:14:09.848071   35807 pod_ready.go:81] duration metric: took 6.510595764s waiting for pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:09.848078   35807 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mqfs7" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:09.853117   35807 pod_ready.go:92] pod "kube-proxy-mqfs7" in "kube-system" namespace has status "Ready":"True"
	I0223 15:14:09.853126   35807 pod_ready.go:81] duration metric: took 5.018246ms waiting for pod "kube-proxy-mqfs7" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:09.853132   35807 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:11.863288   35807 pod_ready.go:102] pod "kube-scheduler-embed-certs-057000" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:12.863940   35807 pod_ready.go:92] pod "kube-scheduler-embed-certs-057000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:14:12.863956   35807 pod_ready.go:81] duration metric: took 3.010743905s waiting for pod "kube-scheduler-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:12.863964   35807 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace to be "Ready" ...
	I0223 15:14:14.876702   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:16.877932   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:19.379238   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:21.876452   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:24.377332   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:26.377420   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:28.876885   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:30.877451   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:33.376789   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:35.877129   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:37.877481   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:39.878385   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:42.377633   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:44.878779   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:47.377491   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:49.378343   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:51.877243   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:54.378999   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:56.879186   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:14:59.376754   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:01.378708   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:03.379359   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:05.877797   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:08.376404   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:10.377698   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:12.876817   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:14.877611   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:16.878364   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:19.377770   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:21.379239   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:23.876560   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:25.879561   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:28.377753   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:30.379884   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:32.878212   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:34.880040   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:37.377283   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:39.379094   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:41.878298   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:44.378394   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:46.878228   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:48.878724   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:50.879137   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:52.881046   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:55.378196   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:57.380255   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:15:59.888564   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:02.377873   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:04.881183   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:07.380603   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:09.880951   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:12.380810   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:14.877138   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:16.879091   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:19.379210   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:21.880420   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:23.881275   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:26.379425   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:28.380209   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:30.881217   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:33.381351   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:35.879849   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:38.379854   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:40.381125   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:42.879930   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:44.880048   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:46.880131   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:49.380735   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:51.879699   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:53.880817   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:56.380329   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:16:58.880898   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:01.380390   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:03.382387   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:05.881818   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:08.379144   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:10.383755   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:12.881948   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:15.382188   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:17.879942   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:19.881048   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:22.380926   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:24.381053   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:26.381528   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:28.880817   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:31.380860   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:33.882285   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:36.381456   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:38.381766   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:40.382305   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:42.382477   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:44.382775   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:46.883374   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:49.380757   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:51.882425   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:54.383535   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:56.881362   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:17:59.383753   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:18:01.881862   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:18:04.382661   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:18:06.383639   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:18:08.883256   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:18:11.383281   35807 pod_ready.go:102] pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace has status "Ready":"False"
	I0223 15:18:12.875178   35807 pod_ready.go:81] duration metric: took 4m0.005230886s waiting for pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace to be "Ready" ...
	E0223 15:18:12.875202   35807 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7997d45854-2dqv2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0223 15:18:12.875220   35807 pod_ready.go:38] duration metric: took 4m9.550662251s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
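The four-minute wait that just expired is a per-pod Ready check: metrics-server-7997d45854-2dqv2 never left Pending, so the whole extra wait fails. A client-go sketch of that kind of check is below; minikube's pod_ready.go is more elaborate (label selection, per-pod bookkeeping), and the kubeconfig path and poll interval here are assumptions.

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		deadline := time.Now().Add(4 * time.Minute) // same budget as the log's "waiting up to 4m0s"
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"metrics-server-7997d45854-2dqv2", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready") // the outcome recorded above
	}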
	I0223 15:18:12.875244   35807 kubeadm.go:637] restartCluster took 4m28.081272806s
	W0223 15:18:12.875359   35807 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0223 15:18:12.875390   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0223 15:18:17.015714   35807 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.140179716s)
	I0223 15:18:17.015790   35807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 15:18:17.025896   35807 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 15:18:17.033762   35807 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 15:18:17.033812   35807 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 15:18:17.041356   35807 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 15:18:17.041384   35807 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 15:18:17.088336   35807 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0223 15:18:17.088390   35807 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 15:18:17.192213   35807 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 15:18:17.192301   35807 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 15:18:17.192388   35807 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 15:18:17.320739   35807 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 15:18:17.345523   35807 out.go:204]   - Generating certificates and keys ...
	I0223 15:18:17.345610   35807 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 15:18:17.345673   35807 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 15:18:17.345747   35807 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 15:18:17.345801   35807 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 15:18:17.345892   35807 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 15:18:17.345953   35807 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 15:18:17.346007   35807 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 15:18:17.346065   35807 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 15:18:17.346138   35807 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 15:18:17.346208   35807 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 15:18:17.346240   35807 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 15:18:17.346298   35807 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 15:18:17.632341   35807 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 15:18:17.703572   35807 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 15:18:17.816737   35807 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 15:18:18.027703   35807 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 15:18:18.039202   35807 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 15:18:18.039655   35807 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 15:18:18.039737   35807 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 15:18:18.114578   35807 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 15:18:18.136147   35807 out.go:204]   - Booting up control plane ...
	I0223 15:18:18.136279   35807 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 15:18:18.136392   35807 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 15:18:18.136451   35807 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 15:18:18.136535   35807 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 15:18:18.136687   35807 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 15:18:23.123217   35807 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002471 seconds
	I0223 15:18:23.123395   35807 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 15:18:23.132689   35807 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 15:18:23.649323   35807 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0223 15:18:23.649485   35807 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-057000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 15:18:24.157508   35807 kubeadm.go:322] [bootstrap-token] Using token: zmcumw.iw12h7nm0l7d66ha
	I0223 15:18:24.196282   35807 out.go:204]   - Configuring RBAC rules ...
	I0223 15:18:24.196408   35807 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 15:18:24.198890   35807 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 15:18:24.238996   35807 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 15:18:24.241324   35807 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 15:18:24.243760   35807 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 15:18:24.245965   35807 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 15:18:24.254193   35807 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 15:18:24.389832   35807 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0223 15:18:24.647866   35807 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0223 15:18:24.648873   35807 kubeadm.go:322] 
	I0223 15:18:24.648957   35807 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0223 15:18:24.648967   35807 kubeadm.go:322] 
	I0223 15:18:24.649108   35807 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0223 15:18:24.649124   35807 kubeadm.go:322] 
	I0223 15:18:24.649156   35807 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0223 15:18:24.649226   35807 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 15:18:24.649280   35807 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 15:18:24.649287   35807 kubeadm.go:322] 
	I0223 15:18:24.649334   35807 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0223 15:18:24.649342   35807 kubeadm.go:322] 
	I0223 15:18:24.649385   35807 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 15:18:24.649394   35807 kubeadm.go:322] 
	I0223 15:18:24.649433   35807 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0223 15:18:24.649515   35807 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 15:18:24.649602   35807 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 15:18:24.649613   35807 kubeadm.go:322] 
	I0223 15:18:24.649692   35807 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0223 15:18:24.649766   35807 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0223 15:18:24.649777   35807 kubeadm.go:322] 
	I0223 15:18:24.649835   35807 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zmcumw.iw12h7nm0l7d66ha \
	I0223 15:18:24.649914   35807 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 \
	I0223 15:18:24.649938   35807 kubeadm.go:322] 	--control-plane 
	I0223 15:18:24.649947   35807 kubeadm.go:322] 
	I0223 15:18:24.650021   35807 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0223 15:18:24.650028   35807 kubeadm.go:322] 
	I0223 15:18:24.650090   35807 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zmcumw.iw12h7nm0l7d66ha \
	I0223 15:18:24.650203   35807 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dc114a02ba7243eac062ae433b8dd3c4a63e42a63011fc73e64e6e2ba1098722 
	I0223 15:18:24.653027   35807 kubeadm.go:322] W0223 23:18:17.082998    9044 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 15:18:24.653172   35807 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 15:18:24.653289   35807 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 15:18:24.653304   35807 cni.go:84] Creating CNI manager for ""
	I0223 15:18:24.653324   35807 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 15:18:24.675212   35807 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0223 15:18:24.749977   35807 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0223 15:18:24.758373   35807 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0223 15:18:24.773254   35807 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 15:18:24.773356   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:24.773358   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=75fb585b5a7c9fbcfcf10e5f8e856de2145fcfc0 minikube.k8s.io/name=embed-certs-057000 minikube.k8s.io/updated_at=2023_02_23T15_18_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:24.782764   35807 ops.go:34] apiserver oom_adj: -16
	I0223 15:18:24.860015   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:25.422420   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:25.924217   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:26.422114   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:26.923371   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:27.423142   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:27.922586   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:28.424347   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:28.924022   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:29.422840   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:29.923187   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:30.423065   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:30.923242   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:31.422841   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:31.923064   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:32.424332   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:32.923110   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:33.422678   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:33.922936   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:34.423032   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:34.922892   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:35.424403   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:35.923292   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:36.422314   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:36.922459   35807 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 15:18:36.988689   35807 kubeadm.go:1073] duration metric: took 12.215114936s to wait for elevateKubeSystemPrivileges.
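The elevateKubeSystemPrivileges wait above simply re-runs `kubectl get sa default` every ~500ms until the default ServiceAccount exists, because the RBAC binding created next needs it. A minimal local sketch of that loop follows; the real calls go through minikube's ssh_runner with its bundled kubectl and kubeconfig, while this uses whatever kubectl and context are on the local PATH.

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Exits non-zero until kube-controller-manager has created the service account.
			if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
				fmt.Println("default service account exists; safe to create RBAC bindings")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for the default service account")
	}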
	I0223 15:18:36.988709   35807 kubeadm.go:403] StartCluster complete in 4m52.221323472s
	I0223 15:18:36.988729   35807 settings.go:142] acquiring lock: {Name:mk5254606ab776d081c4c857df8d4e00b86fee57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 15:18:36.988821   35807 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 15:18:36.989567   35807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/kubeconfig: {Name:mk366c13f6069774a57c4d74123d5172c8522a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 15:18:36.989831   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 15:18:36.989856   35807 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0223 15:18:36.989967   35807 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-057000"
	I0223 15:18:36.989971   35807 addons.go:65] Setting default-storageclass=true in profile "embed-certs-057000"
	I0223 15:18:36.989974   35807 addons.go:65] Setting dashboard=true in profile "embed-certs-057000"
	I0223 15:18:36.989990   35807 addons.go:227] Setting addon dashboard=true in "embed-certs-057000"
	W0223 15:18:36.989997   35807 addons.go:236] addon dashboard should already be in state true
	I0223 15:18:36.989997   35807 addons.go:227] Setting addon storage-provisioner=true in "embed-certs-057000"
	I0223 15:18:36.989996   35807 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-057000"
	W0223 15:18:36.990004   35807 addons.go:236] addon storage-provisioner should already be in state true
	I0223 15:18:36.989997   35807 addons.go:65] Setting metrics-server=true in profile "embed-certs-057000"
	I0223 15:18:36.990011   35807 config.go:182] Loaded profile config "embed-certs-057000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 15:18:36.990021   35807 addons.go:227] Setting addon metrics-server=true in "embed-certs-057000"
	W0223 15:18:36.990029   35807 addons.go:236] addon metrics-server should already be in state true
	I0223 15:18:36.990038   35807 host.go:66] Checking if "embed-certs-057000" exists ...
	I0223 15:18:36.990038   35807 host.go:66] Checking if "embed-certs-057000" exists ...
	I0223 15:18:36.990068   35807 host.go:66] Checking if "embed-certs-057000" exists ...
	I0223 15:18:36.990316   35807 cli_runner.go:164] Run: docker container inspect embed-certs-057000 --format={{.State.Status}}
	I0223 15:18:36.990404   35807 cli_runner.go:164] Run: docker container inspect embed-certs-057000 --format={{.State.Status}}
	I0223 15:18:36.990474   35807 cli_runner.go:164] Run: docker container inspect embed-certs-057000 --format={{.State.Status}}
	I0223 15:18:36.993698   35807 cli_runner.go:164] Run: docker container inspect embed-certs-057000 --format={{.State.Status}}
	I0223 15:18:37.131590   35807 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 15:18:37.095660   35807 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0223 15:18:37.154460   35807 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 15:18:37.228574   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 15:18:37.191588   35807 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0223 15:18:37.194975   35807 addons.go:227] Setting addon default-storageclass=true in "embed-certs-057000"
	I0223 15:18:37.198527   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0223 15:18:37.228676   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	W0223 15:18:37.265689   35807 addons.go:236] addon default-storageclass should already be in state true
	I0223 15:18:37.265854   35807 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0223 15:18:37.303602   35807 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0223 15:18:37.303638   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0223 15:18:37.303667   35807 host.go:66] Checking if "embed-certs-057000" exists ...
	I0223 15:18:37.340752   35807 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0223 15:18:37.340768   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0223 15:18:37.340808   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:18:37.340835   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:18:37.344461   35807 cli_runner.go:164] Run: docker container inspect embed-certs-057000 --format={{.State.Status}}
	I0223 15:18:37.376162   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:18:37.423843   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:18:37.423957   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:18:37.424585   35807 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 15:18:37.424594   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 15:18:37.424664   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:18:37.490782   35807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63186 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/embed-certs-057000/id_rsa Username:docker}
	I0223 15:18:37.546911   35807 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-057000" context rescaled to 1 replicas
	I0223 15:18:37.546946   35807 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 15:18:37.560566   35807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 15:18:37.570531   35807 out.go:177] * Verifying Kubernetes components...
	I0223 15:18:37.592172   35807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 15:18:37.655240   35807 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0223 15:18:37.655260   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0223 15:18:37.656480   35807 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0223 15:18:37.656497   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0223 15:18:37.676767   35807 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0223 15:18:37.676783   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0223 15:18:37.752061   35807 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0223 15:18:37.752078   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0223 15:18:37.753559   35807 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0223 15:18:37.753574   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0223 15:18:37.764209   35807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 15:18:37.769815   35807 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0223 15:18:37.769831   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0223 15:18:37.772917   35807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0223 15:18:37.855493   35807 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0223 15:18:37.855538   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0223 15:18:37.954976   35807 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0223 15:18:37.954995   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0223 15:18:38.050090   35807 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0223 15:18:38.050108   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0223 15:18:38.079311   35807 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0223 15:18:38.079329   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0223 15:18:38.165137   35807 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0223 15:18:38.165156   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0223 15:18:38.181045   35807 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0223 15:18:38.181061   35807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0223 15:18:38.256070   35807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0223 15:18:38.672883   35807 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.407135461s)
	I0223 15:18:38.672915   35807 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0223 15:18:38.846034   35807 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.275446685s)
	I0223 15:18:38.846088   35807 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.253851874s)
	I0223 15:18:38.846141   35807 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.081878179s)
	I0223 15:18:38.846234   35807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-057000
	I0223 15:18:38.910575   35807 node_ready.go:35] waiting up to 6m0s for node "embed-certs-057000" to be "Ready" ...
	I0223 15:18:38.952951   35807 node_ready.go:49] node "embed-certs-057000" has status "Ready":"True"
	I0223 15:18:38.952965   35807 node_ready.go:38] duration metric: took 42.365559ms waiting for node "embed-certs-057000" to be "Ready" ...
	I0223 15:18:38.952974   35807 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 15:18:38.960580   35807 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-b72v8" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:38.983198   35807 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.210226725s)
	I0223 15:18:38.983228   35807 addons.go:457] Verifying addon metrics-server=true in "embed-certs-057000"
	I0223 15:18:39.945760   35807 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.689612688s)
	I0223 15:18:39.969226   35807 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-057000 addons enable metrics-server	
	
	
	I0223 15:18:40.041884   35807 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0223 15:18:40.116122   35807 addons.go:492] enable addons completed in 3.126193827s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0223 15:18:40.973320   35807 pod_ready.go:102] pod "coredns-787d4945fb-b72v8" in "kube-system" namespace has status "Ready":"False"
	I0223 15:18:42.974864   35807 pod_ready.go:92] pod "coredns-787d4945fb-b72v8" in "kube-system" namespace has status "Ready":"True"
	I0223 15:18:42.974885   35807 pod_ready.go:81] duration metric: took 4.01418655s waiting for pod "coredns-787d4945fb-b72v8" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:42.974896   35807 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-k7q5k" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:42.983612   35807 pod_ready.go:92] pod "coredns-787d4945fb-k7q5k" in "kube-system" namespace has status "Ready":"True"
	I0223 15:18:42.983635   35807 pod_ready.go:81] duration metric: took 8.727002ms waiting for pod "coredns-787d4945fb-k7q5k" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:42.983651   35807 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:42.990449   35807 pod_ready.go:92] pod "etcd-embed-certs-057000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:18:42.990460   35807 pod_ready.go:81] duration metric: took 6.799062ms waiting for pod "etcd-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:42.990468   35807 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:42.997225   35807 pod_ready.go:92] pod "kube-apiserver-embed-certs-057000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:18:42.997235   35807 pod_ready.go:81] duration metric: took 6.762122ms waiting for pod "kube-apiserver-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:42.997242   35807 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:43.002278   35807 pod_ready.go:92] pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:18:43.002293   35807 pod_ready.go:81] duration metric: took 5.044715ms waiting for pod "kube-controller-manager-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:43.002307   35807 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8kn8s" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:43.370461   35807 pod_ready.go:92] pod "kube-proxy-8kn8s" in "kube-system" namespace has status "Ready":"True"
	I0223 15:18:43.370476   35807 pod_ready.go:81] duration metric: took 368.147374ms waiting for pod "kube-proxy-8kn8s" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:43.370489   35807 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:43.770227   35807 pod_ready.go:92] pod "kube-scheduler-embed-certs-057000" in "kube-system" namespace has status "Ready":"True"
	I0223 15:18:43.770238   35807 pod_ready.go:81] duration metric: took 399.727904ms waiting for pod "kube-scheduler-embed-certs-057000" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:43.770245   35807 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace to be "Ready" ...
	I0223 15:18:46.176465   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:18:48.176530   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:18:50.680825   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:18:53.178373   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:18:55.677309   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:18:57.677773   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:00.180338   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:02.677481   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:04.677641   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:06.679478   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:09.176626   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:11.178372   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:13.180304   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:15.678595   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:18.178618   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:20.179466   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:22.679068   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:24.680375   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:27.179277   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:29.679003   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:32.179741   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:34.679155   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:37.181577   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:39.678087   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:41.678388   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:43.679366   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:46.177407   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:48.178492   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:50.179140   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:52.179946   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:54.678761   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:56.678856   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:19:58.679777   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:01.179893   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:03.180057   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:05.181830   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:07.679584   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:10.181566   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:12.680318   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:14.680724   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:17.182552   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:19.679388   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:22.180270   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:24.182091   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:26.679725   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:28.680327   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:30.680477   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:33.180967   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:35.182637   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:37.680639   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:40.182527   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:42.680618   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:44.681587   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:47.182204   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:49.680888   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:52.181430   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:54.680604   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:56.680744   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:20:58.680933   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:00.681611   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:03.180546   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:05.681700   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:08.180915   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:10.681038   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:12.682975   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:15.181783   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:17.681063   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:19.682163   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:22.180958   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:24.183469   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:26.681392   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:28.681818   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:30.681980   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:32.682492   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:35.182961   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:37.183315   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:39.681625   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:42.181244   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:44.182470   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:46.682186   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:49.182670   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:51.182740   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:53.183028   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:55.183927   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:21:57.682390   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:00.182802   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:02.183181   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:04.683128   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:07.184556   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:09.681742   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:11.683070   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:14.184022   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:16.682401   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:19.183733   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:21.682710   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:23.682776   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:26.184771   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:28.683319   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:31.185488   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:33.681700   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:35.683214   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:37.684520   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:40.185068   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:42.683828   35807 pod_ready.go:102] pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace has status "Ready":"False"
	I0223 15:22:44.190380   35807 pod_ready.go:81] duration metric: took 4m0.414145583s waiting for pod "metrics-server-7997d45854-xzzrx" in "kube-system" namespace to be "Ready" ...
	E0223 15:22:44.190392   35807 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0223 15:22:44.190398   35807 pod_ready.go:38] duration metric: took 4m5.231315304s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 15:22:44.190413   35807 api_server.go:51] waiting for apiserver process to appear ...
	I0223 15:22:44.190497   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:22:44.209918   35807 logs.go:277] 1 containers: [4db94e1b8879]
	I0223 15:22:44.210005   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:22:44.228485   35807 logs.go:277] 1 containers: [c2c300d49ad0]
	I0223 15:22:44.228569   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:22:44.248153   35807 logs.go:277] 1 containers: [a223868c6b70]
	I0223 15:22:44.248236   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:22:44.268919   35807 logs.go:277] 1 containers: [0401d5c7d3e1]
	I0223 15:22:44.269004   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:22:44.288363   35807 logs.go:277] 1 containers: [0a6c90a99924]
	I0223 15:22:44.288451   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:22:44.308751   35807 logs.go:277] 1 containers: [4683c367fcf2]
	I0223 15:22:44.308846   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:22:44.328608   35807 logs.go:277] 0 containers: []
	W0223 15:22:44.328622   35807 logs.go:279] No container was found matching "kindnet"
	I0223 15:22:44.328695   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 15:22:44.348828   35807 logs.go:277] 1 containers: [8e347dd97ccc]
	I0223 15:22:44.348908   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:22:44.368349   35807 logs.go:277] 1 containers: [b10a78d12da2]
	I0223 15:22:44.368369   35807 logs.go:123] Gathering logs for kube-apiserver [4db94e1b8879] ...
	I0223 15:22:44.368379   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db94e1b8879"
	I0223 15:22:44.395879   35807 logs.go:123] Gathering logs for coredns [a223868c6b70] ...
	I0223 15:22:44.395895   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a223868c6b70"
	I0223 15:22:44.417394   35807 logs.go:123] Gathering logs for kube-scheduler [0401d5c7d3e1] ...
	I0223 15:22:44.417410   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0401d5c7d3e1"
	I0223 15:22:44.445121   35807 logs.go:123] Gathering logs for kube-controller-manager [4683c367fcf2] ...
	I0223 15:22:44.445138   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4683c367fcf2"
	I0223 15:22:44.479923   35807 logs.go:123] Gathering logs for dmesg ...
	I0223 15:22:44.479938   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:22:44.492335   35807 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:22:44.492349   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 15:22:44.576168   35807 logs.go:123] Gathering logs for etcd [c2c300d49ad0] ...
	I0223 15:22:44.576183   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c300d49ad0"
	I0223 15:22:44.602378   35807 logs.go:123] Gathering logs for kube-proxy [0a6c90a99924] ...
	I0223 15:22:44.602392   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a6c90a99924"
	I0223 15:22:44.624416   35807 logs.go:123] Gathering logs for storage-provisioner [8e347dd97ccc] ...
	I0223 15:22:44.624431   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e347dd97ccc"
	I0223 15:22:44.645312   35807 logs.go:123] Gathering logs for kubernetes-dashboard [b10a78d12da2] ...
	I0223 15:22:44.645330   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10a78d12da2"
	I0223 15:22:44.667775   35807 logs.go:123] Gathering logs for Docker ...
	I0223 15:22:44.667792   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:22:44.694619   35807 logs.go:123] Gathering logs for container status ...
	I0223 15:22:44.694635   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:22:44.723882   35807 logs.go:123] Gathering logs for kubelet ...
	I0223 15:22:44.723898   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:22:47.297969   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 15:22:47.309076   35807 api_server.go:71] duration metric: took 4m9.75588993s to wait for apiserver process to appear ...
	I0223 15:22:47.309090   35807 api_server.go:87] waiting for apiserver healthz status ...
	I0223 15:22:47.309175   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:22:47.332439   35807 logs.go:277] 1 containers: [4db94e1b8879]
	I0223 15:22:47.332528   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:22:47.352667   35807 logs.go:277] 1 containers: [c2c300d49ad0]
	I0223 15:22:47.352751   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:22:47.374199   35807 logs.go:277] 1 containers: [a223868c6b70]
	I0223 15:22:47.374281   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:22:47.394100   35807 logs.go:277] 1 containers: [0401d5c7d3e1]
	I0223 15:22:47.394181   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:22:47.413802   35807 logs.go:277] 1 containers: [0a6c90a99924]
	I0223 15:22:47.413886   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:22:47.433825   35807 logs.go:277] 1 containers: [4683c367fcf2]
	I0223 15:22:47.433925   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:22:47.453982   35807 logs.go:277] 0 containers: []
	W0223 15:22:47.453995   35807 logs.go:279] No container was found matching "kindnet"
	I0223 15:22:47.454063   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 15:22:47.473718   35807 logs.go:277] 1 containers: [8e347dd97ccc]
	I0223 15:22:47.473796   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:22:47.493867   35807 logs.go:277] 1 containers: [b10a78d12da2]
	I0223 15:22:47.493891   35807 logs.go:123] Gathering logs for etcd [c2c300d49ad0] ...
	I0223 15:22:47.493900   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c300d49ad0"
	I0223 15:22:47.520481   35807 logs.go:123] Gathering logs for coredns [a223868c6b70] ...
	I0223 15:22:47.520497   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a223868c6b70"
	I0223 15:22:47.540819   35807 logs.go:123] Gathering logs for kube-scheduler [0401d5c7d3e1] ...
	I0223 15:22:47.540836   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0401d5c7d3e1"
	I0223 15:22:47.568380   35807 logs.go:123] Gathering logs for kube-proxy [0a6c90a99924] ...
	I0223 15:22:47.568395   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a6c90a99924"
	I0223 15:22:47.590482   35807 logs.go:123] Gathering logs for storage-provisioner [8e347dd97ccc] ...
	I0223 15:22:47.590499   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e347dd97ccc"
	I0223 15:22:47.612028   35807 logs.go:123] Gathering logs for dmesg ...
	I0223 15:22:47.612044   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:22:47.623970   35807 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:22:47.623984   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 15:22:47.707448   35807 logs.go:123] Gathering logs for kube-apiserver [4db94e1b8879] ...
	I0223 15:22:47.707464   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db94e1b8879"
	I0223 15:22:47.733769   35807 logs.go:123] Gathering logs for Docker ...
	I0223 15:22:47.733784   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:22:47.760338   35807 logs.go:123] Gathering logs for container status ...
	I0223 15:22:47.760353   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:22:47.790325   35807 logs.go:123] Gathering logs for kubelet ...
	I0223 15:22:47.790341   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:22:47.866993   35807 logs.go:123] Gathering logs for kube-controller-manager [4683c367fcf2] ...
	I0223 15:22:47.867009   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4683c367fcf2"
	I0223 15:22:47.900391   35807 logs.go:123] Gathering logs for kubernetes-dashboard [b10a78d12da2] ...
	I0223 15:22:47.900407   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10a78d12da2"
	I0223 15:22:50.424683   35807 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63190/healthz ...
	I0223 15:22:50.432779   35807 api_server.go:278] https://127.0.0.1:63190/healthz returned 200:
	ok
	I0223 15:22:50.434045   35807 api_server.go:140] control plane version: v1.26.1
	I0223 15:22:50.434054   35807 api_server.go:130] duration metric: took 3.1248812s to wait for apiserver health ...
	I0223 15:22:50.434059   35807 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 15:22:50.434133   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 15:22:50.454489   35807 logs.go:277] 1 containers: [4db94e1b8879]
	I0223 15:22:50.454579   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 15:22:50.474085   35807 logs.go:277] 1 containers: [c2c300d49ad0]
	I0223 15:22:50.474169   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 15:22:50.493325   35807 logs.go:277] 1 containers: [a223868c6b70]
	I0223 15:22:50.493404   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 15:22:50.512735   35807 logs.go:277] 1 containers: [0401d5c7d3e1]
	I0223 15:22:50.512826   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 15:22:50.532354   35807 logs.go:277] 1 containers: [0a6c90a99924]
	I0223 15:22:50.532440   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 15:22:50.551862   35807 logs.go:277] 1 containers: [4683c367fcf2]
	I0223 15:22:50.551951   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 15:22:50.571562   35807 logs.go:277] 0 containers: []
	W0223 15:22:50.571575   35807 logs.go:279] No container was found matching "kindnet"
	I0223 15:22:50.571643   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0223 15:22:50.590760   35807 logs.go:277] 1 containers: [b10a78d12da2]
	I0223 15:22:50.590847   35807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0223 15:22:50.609914   35807 logs.go:277] 1 containers: [8e347dd97ccc]
	I0223 15:22:50.609937   35807 logs.go:123] Gathering logs for Docker ...
	I0223 15:22:50.609944   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 15:22:50.638751   35807 logs.go:123] Gathering logs for container status ...
	I0223 15:22:50.638785   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 15:22:50.669625   35807 logs.go:123] Gathering logs for describe nodes ...
	I0223 15:22:50.669641   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0223 15:22:50.749760   35807 logs.go:123] Gathering logs for etcd [c2c300d49ad0] ...
	I0223 15:22:50.749776   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c300d49ad0"
	I0223 15:22:50.775829   35807 logs.go:123] Gathering logs for coredns [a223868c6b70] ...
	I0223 15:22:50.775846   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a223868c6b70"
	I0223 15:22:50.799238   35807 logs.go:123] Gathering logs for kube-scheduler [0401d5c7d3e1] ...
	I0223 15:22:50.799254   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0401d5c7d3e1"
	I0223 15:22:50.827021   35807 logs.go:123] Gathering logs for storage-provisioner [8e347dd97ccc] ...
	I0223 15:22:50.827037   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e347dd97ccc"
	I0223 15:22:50.850368   35807 logs.go:123] Gathering logs for kubernetes-dashboard [b10a78d12da2] ...
	I0223 15:22:50.850385   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10a78d12da2"
	I0223 15:22:50.873128   35807 logs.go:123] Gathering logs for kubelet ...
	I0223 15:22:50.873143   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 15:22:50.948405   35807 logs.go:123] Gathering logs for dmesg ...
	I0223 15:22:50.948420   35807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 15:22:50.960726   35807 logs.go:123] Gathering logs for kube-apiserver [4db94e1b8879] ...
	I0223 15:22:50.960742   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4db94e1b8879"
	I0223 15:22:50.987039   35807 logs.go:123] Gathering logs for kube-proxy [0a6c90a99924] ...
	I0223 15:22:50.987057   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a6c90a99924"
	I0223 15:22:51.009413   35807 logs.go:123] Gathering logs for kube-controller-manager [4683c367fcf2] ...
	I0223 15:22:51.009432   35807 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4683c367fcf2"
	I0223 15:22:53.552786   35807 system_pods.go:59] 8 kube-system pods found
	I0223 15:22:53.552801   35807 system_pods.go:61] "coredns-787d4945fb-b72v8" [b5b18f9e-3fec-4182-a188-f2155dcad6c8] Running
	I0223 15:22:53.552806   35807 system_pods.go:61] "etcd-embed-certs-057000" [1235518d-0efc-40d0-84e1-7f5329ed1b7c] Running
	I0223 15:22:53.552809   35807 system_pods.go:61] "kube-apiserver-embed-certs-057000" [886e04ee-174d-4e77-a6ba-97aa19f54cc8] Running
	I0223 15:22:53.552814   35807 system_pods.go:61] "kube-controller-manager-embed-certs-057000" [71640063-a0f0-42fc-ac0d-94a573137f55] Running
	I0223 15:22:53.552817   35807 system_pods.go:61] "kube-proxy-8kn8s" [16e3d828-d97e-4c5a-abd0-e06acaf5d2d8] Running
	I0223 15:22:53.552841   35807 system_pods.go:61] "kube-scheduler-embed-certs-057000" [f6aaa2f1-2914-45f6-9ac3-dd436464bfd2] Running
	I0223 15:22:53.552847   35807 system_pods.go:61] "metrics-server-7997d45854-xzzrx" [4a521958-e07e-41e2-821e-a9e79b2d385e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 15:22:53.552852   35807 system_pods.go:61] "storage-provisioner" [109d9299-6947-4832-ae4b-b8525b107935] Running
	I0223 15:22:53.552856   35807 system_pods.go:74] duration metric: took 3.118716172s to wait for pod list to return data ...
	I0223 15:22:53.552862   35807 default_sa.go:34] waiting for default service account to be created ...
	I0223 15:22:53.555497   35807 default_sa.go:45] found service account: "default"
	I0223 15:22:53.555506   35807 default_sa.go:55] duration metric: took 2.639979ms for default service account to be created ...
	I0223 15:22:53.555511   35807 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 15:22:53.559758   35807 system_pods.go:86] 8 kube-system pods found
	I0223 15:22:53.559770   35807 system_pods.go:89] "coredns-787d4945fb-b72v8" [b5b18f9e-3fec-4182-a188-f2155dcad6c8] Running
	I0223 15:22:53.559775   35807 system_pods.go:89] "etcd-embed-certs-057000" [1235518d-0efc-40d0-84e1-7f5329ed1b7c] Running
	I0223 15:22:53.559778   35807 system_pods.go:89] "kube-apiserver-embed-certs-057000" [886e04ee-174d-4e77-a6ba-97aa19f54cc8] Running
	I0223 15:22:53.559782   35807 system_pods.go:89] "kube-controller-manager-embed-certs-057000" [71640063-a0f0-42fc-ac0d-94a573137f55] Running
	I0223 15:22:53.559785   35807 system_pods.go:89] "kube-proxy-8kn8s" [16e3d828-d97e-4c5a-abd0-e06acaf5d2d8] Running
	I0223 15:22:53.559789   35807 system_pods.go:89] "kube-scheduler-embed-certs-057000" [f6aaa2f1-2914-45f6-9ac3-dd436464bfd2] Running
	I0223 15:22:53.559794   35807 system_pods.go:89] "metrics-server-7997d45854-xzzrx" [4a521958-e07e-41e2-821e-a9e79b2d385e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0223 15:22:53.559798   35807 system_pods.go:89] "storage-provisioner" [109d9299-6947-4832-ae4b-b8525b107935] Running
	I0223 15:22:53.559802   35807 system_pods.go:126] duration metric: took 4.287848ms to wait for k8s-apps to be running ...
	I0223 15:22:53.559807   35807 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 15:22:53.559861   35807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 15:22:53.569961   35807 system_svc.go:56] duration metric: took 10.149243ms WaitForService to wait for kubelet.
	I0223 15:22:53.569975   35807 kubeadm.go:578] duration metric: took 4m16.016636655s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 15:22:53.569991   35807 node_conditions.go:102] verifying NodePressure condition ...
	I0223 15:22:53.572927   35807 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0223 15:22:53.572937   35807 node_conditions.go:123] node cpu capacity is 6
	I0223 15:22:53.572945   35807 node_conditions.go:105] duration metric: took 2.949841ms to run NodePressure ...
	I0223 15:22:53.572953   35807 start.go:228] waiting for startup goroutines ...
	I0223 15:22:53.572958   35807 start.go:233] waiting for cluster config update ...
	I0223 15:22:53.572968   35807 start.go:242] writing updated cluster config ...
	I0223 15:22:53.573298   35807 ssh_runner.go:195] Run: rm -f paused
	I0223 15:22:53.611317   35807 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0223 15:22:53.633066   35807 out.go:177] * Done! kubectl is now configured to use "embed-certs-057000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 22:58:31 UTC, end at Thu 2023-02-23 23:25:32 UTC. --
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.361306771Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.361678292Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.361745336Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363177871Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363230712Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363257014Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363271466Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363298828Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363323522Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363351730Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363374761Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363404400Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363559257Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363684527Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.363723678Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.364099630Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.372604384Z" level=info msg="Loading containers: start."
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.450240223Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.483107583Z" level=info msg="Loading containers: done."
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.491003119Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.491073591Z" level=info msg="Daemon has completed initialization"
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.511892551Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 22:58:34 old-k8s-version-919000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.515536521Z" level=info msg="API listen on [::]:2376"
	Feb 23 22:58:34 old-k8s-version-919000 dockerd[637]: time="2023-02-23T22:58:34.520614220Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-02-23T23:25:34Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:25:35 up  2:54,  0 users,  load average: 0.18, 0.38, 0.72
	Linux old-k8s-version-919000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 22:58:31 UTC, end at Thu 2023-02-23 23:25:35 UTC. --
	Feb 23 23:25:33 old-k8s-version-919000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 23 23:25:34 old-k8s-version-919000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1667.
	Feb 23 23:25:34 old-k8s-version-919000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 23 23:25:34 old-k8s-version-919000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 23 23:25:34 old-k8s-version-919000 kubelet[33896]: I0223 23:25:34.575315   33896 server.go:410] Version: v1.16.0
	Feb 23 23:25:34 old-k8s-version-919000 kubelet[33896]: I0223 23:25:34.575517   33896 plugins.go:100] No cloud provider specified.
	Feb 23 23:25:34 old-k8s-version-919000 kubelet[33896]: I0223 23:25:34.575527   33896 server.go:773] Client rotation is on, will bootstrap in background
	Feb 23 23:25:34 old-k8s-version-919000 kubelet[33896]: I0223 23:25:34.577216   33896 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 23 23:25:34 old-k8s-version-919000 kubelet[33896]: W0223 23:25:34.577868   33896 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 23 23:25:34 old-k8s-version-919000 kubelet[33896]: W0223 23:25:34.577938   33896 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 23 23:25:34 old-k8s-version-919000 kubelet[33896]: F0223 23:25:34.577993   33896 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 23 23:25:34 old-k8s-version-919000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 23 23:25:34 old-k8s-version-919000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 23 23:25:35 old-k8s-version-919000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Feb 23 23:25:35 old-k8s-version-919000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 23 23:25:35 old-k8s-version-919000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 23 23:25:35 old-k8s-version-919000 kubelet[33934]: I0223 23:25:35.324533   33934 server.go:410] Version: v1.16.0
	Feb 23 23:25:35 old-k8s-version-919000 kubelet[33934]: I0223 23:25:35.324747   33934 plugins.go:100] No cloud provider specified.
	Feb 23 23:25:35 old-k8s-version-919000 kubelet[33934]: I0223 23:25:35.324756   33934 server.go:773] Client rotation is on, will bootstrap in background
	Feb 23 23:25:35 old-k8s-version-919000 kubelet[33934]: I0223 23:25:35.326465   33934 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 23 23:25:35 old-k8s-version-919000 kubelet[33934]: W0223 23:25:35.327166   33934 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 23 23:25:35 old-k8s-version-919000 kubelet[33934]: W0223 23:25:35.327234   33934 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 23 23:25:35 old-k8s-version-919000 kubelet[33934]: F0223 23:25:35.327258   33934 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 23 23:25:35 old-k8s-version-919000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 23 23:25:35 old-k8s-version-919000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 15:25:35.072447   36766 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 2 (391.598325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-919000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.71s)
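The kubelet log above shows the node crash-looping on "failed to run Kubelet: mountpoint for cpu not found" (restart counter past 1600), so the apiserver never comes back and the addon check times out. A common cause is the node container seeing only the unified cgroup v2 hierarchy, which the v1.16.0 kubelet predates; the commands below are a minimal diagnostic sketch only (the container name is taken from the log above, and none of this is part of the original report):

	# Hypothetical diagnostics for the "mountpoint for cpu not found" crash loop.
	# List cgroup mounts inside the minikube node container:
	docker exec old-k8s-version-919000 sh -c 'grep cgroup /proc/mounts'
	# If only cgroup2 is mounted, /sys/fs/cgroup/cgroup.controllers exists and there
	# is no separate v1 "cpu" mountpoint for the old kubelet to find:
	docker exec old-k8s-version-919000 sh -c 'cat /sys/fs/cgroup/cgroup.controllers'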

                                                
                                    

Test pass (272/306)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 21.84
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.26.1/json-events 14.04
11 TestDownloadOnly/v1.26.1/preload-exists 0
14 TestDownloadOnly/v1.26.1/kubectl 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.66
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
18 TestDownloadOnlyKic 2.16
19 TestBinaryMirror 1.67
20 TestOffline 49.96
22 TestAddons/Setup 146.49
26 TestAddons/parallel/MetricsServer 5.59
27 TestAddons/parallel/HelmTiller 12.79
29 TestAddons/parallel/CSI 52.57
30 TestAddons/parallel/Headlamp 11.1
31 TestAddons/parallel/CloudSpanner 5.53
34 TestAddons/serial/GCPAuth/Namespaces 0.1
35 TestAddons/StoppedEnableDisable 11.42
36 TestCertOptions 33.46
37 TestCertExpiration 240.42
38 TestDockerFlags 39.52
39 TestForceSystemdFlag 33.19
40 TestForceSystemdEnv 36.99
42 TestHyperKitDriverInstallOrUpdate 11.55
45 TestErrorSpam/setup 27.49
46 TestErrorSpam/start 2.41
47 TestErrorSpam/status 1.24
48 TestErrorSpam/pause 1.74
49 TestErrorSpam/unpause 1.81
50 TestErrorSpam/stop 11.52
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 42.9
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 45.9
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.07
61 TestFunctional/serial/CacheCmd/cache/add_remote 8.09
62 TestFunctional/serial/CacheCmd/cache/add_local 1.65
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.41
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.82
67 TestFunctional/serial/CacheCmd/cache/delete 0.14
68 TestFunctional/serial/MinikubeKubectlCmd 0.51
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.8
70 TestFunctional/serial/ExtraConfig 44.12
71 TestFunctional/serial/ComponentHealth 0.05
72 TestFunctional/serial/LogsCmd 3.06
73 TestFunctional/serial/LogsFileCmd 3.1
75 TestFunctional/parallel/ConfigCmd 0.44
76 TestFunctional/parallel/DashboardCmd 12.92
77 TestFunctional/parallel/DryRun 1.57
78 TestFunctional/parallel/InternationalLanguage 0.69
79 TestFunctional/parallel/StatusCmd 1.55
84 TestFunctional/parallel/AddonsCmd 0.24
85 TestFunctional/parallel/PersistentVolumeClaim 25.81
87 TestFunctional/parallel/SSHCmd 0.79
88 TestFunctional/parallel/CpCmd 2.09
89 TestFunctional/parallel/MySQL 24.43
90 TestFunctional/parallel/FileSync 0.45
91 TestFunctional/parallel/CertSync 2.53
95 TestFunctional/parallel/NodeLabels 0.08
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
99 TestFunctional/parallel/License 0.75
100 TestFunctional/parallel/Version/short 0.23
101 TestFunctional/parallel/Version/components 1.05
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
106 TestFunctional/parallel/ImageCommands/ImageBuild 8.91
107 TestFunctional/parallel/ImageCommands/Setup 2.76
108 TestFunctional/parallel/DockerEnv/bash 1.83
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.39
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.66
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.58
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.91
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.98
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.74
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.47
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.14
123 TestFunctional/parallel/ServiceCmd/ServiceJSONOutput 0.62
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.62
131 TestFunctional/parallel/ProfileCmd/profile_list 0.57
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
133 TestFunctional/parallel/MountCmd/any-port 10.75
134 TestFunctional/parallel/MountCmd/specific-port 2.26
135 TestFunctional/delete_addon-resizer_images 0.15
136 TestFunctional/delete_my-image_image 0.06
137 TestFunctional/delete_minikube_cached_images 0.06
141 TestImageBuild/serial/NormalBuild 2.33
142 TestImageBuild/serial/BuildWithBuildArg 0.94
143 TestImageBuild/serial/BuildWithDockerIgnore 0.47
144 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.4
154 TestJSONOutput/start/Command 52.06
155 TestJSONOutput/start/Audit 0
157 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/pause/Command 0.62
161 TestJSONOutput/pause/Audit 0
163 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/unpause/Command 0.6
167 TestJSONOutput/unpause/Audit 0
169 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/stop/Command 5.75
173 TestJSONOutput/stop/Audit 0
175 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
177 TestErrorJSONOutput 0.74
179 TestKicCustomNetwork/create_custom_network 31.51
180 TestKicCustomNetwork/use_default_bridge_network 29.29
181 TestKicExistingNetwork 29.4
182 TestKicCustomSubnet 30.35
183 TestKicStaticIP 31.35
184 TestMainNoArgs 0.07
185 TestMinikubeProfile 63.08
188 TestMountStart/serial/StartWithMountFirst 8.09
189 TestMountStart/serial/VerifyMountFirst 0.4
190 TestMountStart/serial/StartWithMountSecond 8.01
191 TestMountStart/serial/VerifyMountSecond 0.4
192 TestMountStart/serial/DeleteFirst 2.12
193 TestMountStart/serial/VerifyMountPostDelete 0.39
194 TestMountStart/serial/Stop 1.59
195 TestMountStart/serial/RestartStopped 6.07
196 TestMountStart/serial/VerifyMountPostStop 0.4
199 TestMultiNode/serial/FreshStart2Nodes 76.69
202 TestMultiNode/serial/AddNode 22.1
203 TestMultiNode/serial/ProfileList 0.44
204 TestMultiNode/serial/CopyFile 14.59
205 TestMultiNode/serial/StopNode 3
206 TestMultiNode/serial/StartAfterStop 10.14
207 TestMultiNode/serial/RestartKeepsNodes 87.19
208 TestMultiNode/serial/DeleteNode 6.13
209 TestMultiNode/serial/StopMultiNode 21.85
210 TestMultiNode/serial/RestartMultiNode 71.29
211 TestMultiNode/serial/ValidateNameConflict 32.88
215 TestPreload 135.77
217 TestScheduledStopUnix 103.07
218 TestSkaffold 69.82
220 TestInsufficientStorage 14.19
236 TestStoppedBinaryUpgrade/Setup 3.37
238 TestStoppedBinaryUpgrade/MinikubeLogs 3.45
247 TestPause/serial/Start 48.99
248 TestPause/serial/SecondStartNoReconfiguration 42.81
249 TestPause/serial/Pause 0.91
250 TestPause/serial/VerifyStatus 0.51
251 TestPause/serial/Unpause 0.66
252 TestPause/serial/PauseAgain 0.99
253 TestPause/serial/DeletePaused 2.76
254 TestPause/serial/VerifyDeletedResources 0.57
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.72
257 TestNoKubernetes/serial/StartWithK8s 33.05
258 TestNoKubernetes/serial/StartWithStopK8s 17.58
259 TestNoKubernetes/serial/Start 7.29
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
261 TestNoKubernetes/serial/ProfileList 16.15
262 TestNoKubernetes/serial/Stop 1.6
263 TestNoKubernetes/serial/StartNoArgs 4.93
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.42
265 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 18.5
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 20.1
267 TestNetworkPlugins/group/auto/Start 44.32
268 TestNetworkPlugins/group/auto/KubeletFlags 0.4
269 TestNetworkPlugins/group/auto/NetCatPod 12.24
270 TestNetworkPlugins/group/auto/DNS 0.13
271 TestNetworkPlugins/group/auto/Localhost 0.1
272 TestNetworkPlugins/group/auto/HairPin 0.12
273 TestNetworkPlugins/group/kindnet/Start 65.64
274 TestNetworkPlugins/group/calico/Start 73.36
275 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
276 TestNetworkPlugins/group/kindnet/KubeletFlags 0.45
277 TestNetworkPlugins/group/kindnet/NetCatPod 12.21
278 TestNetworkPlugins/group/kindnet/DNS 0.13
279 TestNetworkPlugins/group/kindnet/Localhost 0.12
280 TestNetworkPlugins/group/kindnet/HairPin 0.12
281 TestNetworkPlugins/group/calico/ControllerPod 5.02
282 TestNetworkPlugins/group/calico/KubeletFlags 0.42
283 TestNetworkPlugins/group/calico/NetCatPod 13.27
284 TestNetworkPlugins/group/calico/DNS 0.13
285 TestNetworkPlugins/group/custom-flannel/Start 73.26
286 TestNetworkPlugins/group/calico/Localhost 0.11
287 TestNetworkPlugins/group/calico/HairPin 0.12
288 TestNetworkPlugins/group/false/Start 44.49
289 TestNetworkPlugins/group/false/KubeletFlags 0.47
290 TestNetworkPlugins/group/false/NetCatPod 12.21
291 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.45
292 TestNetworkPlugins/group/custom-flannel/NetCatPod 17.22
293 TestNetworkPlugins/group/false/DNS 0.13
294 TestNetworkPlugins/group/false/Localhost 0.12
295 TestNetworkPlugins/group/false/HairPin 0.1
296 TestNetworkPlugins/group/custom-flannel/DNS 0.14
297 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
298 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
299 TestNetworkPlugins/group/enable-default-cni/Start 48.36
300 TestNetworkPlugins/group/flannel/Start 57.13
301 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
302 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.18
303 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
304 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
305 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
306 TestNetworkPlugins/group/flannel/ControllerPod 5.02
307 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
308 TestNetworkPlugins/group/flannel/NetCatPod 12.2
309 TestNetworkPlugins/group/bridge/Start 44.09
310 TestNetworkPlugins/group/flannel/DNS 0.13
311 TestNetworkPlugins/group/flannel/Localhost 0.12
312 TestNetworkPlugins/group/flannel/HairPin 0.12
313 TestNetworkPlugins/group/kubenet/Start 57.88
314 TestNetworkPlugins/group/bridge/KubeletFlags 0.47
315 TestNetworkPlugins/group/bridge/NetCatPod 11.19
316 TestNetworkPlugins/group/bridge/DNS 0.12
317 TestNetworkPlugins/group/bridge/Localhost 0.11
318 TestNetworkPlugins/group/bridge/HairPin 0.13
321 TestNetworkPlugins/group/kubenet/KubeletFlags 0.45
322 TestNetworkPlugins/group/kubenet/NetCatPod 12.25
323 TestNetworkPlugins/group/kubenet/DNS 0.12
324 TestNetworkPlugins/group/kubenet/Localhost 0.12
325 TestNetworkPlugins/group/kubenet/HairPin 0.11
327 TestStartStop/group/no-preload/serial/FirstStart 76.68
328 TestStartStop/group/no-preload/serial/DeployApp 8.27
329 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.84
330 TestStartStop/group/no-preload/serial/Stop 11.03
331 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.37
332 TestStartStop/group/no-preload/serial/SecondStart 557.69
335 TestStartStop/group/old-k8s-version/serial/Stop 1.58
336 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.37
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
339 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.43
341 TestStartStop/group/no-preload/serial/Pause 3.13
343 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 48.82
344 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
346 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.92
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.37
348 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 307.56
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 8.02
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.43
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.14
355 TestStartStop/group/newest-cni/serial/FirstStart 42.21
356 TestStartStop/group/newest-cni/serial/DeployApp 0
357 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.89
358 TestStartStop/group/newest-cni/serial/Stop 5.84
359 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.37
360 TestStartStop/group/newest-cni/serial/SecondStart 24.48
361 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
362 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.43
364 TestStartStop/group/newest-cni/serial/Pause 3.1
366 TestStartStop/group/embed-certs/serial/FirstStart 46
367 TestStartStop/group/embed-certs/serial/DeployApp 10.27
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.83
369 TestStartStop/group/embed-certs/serial/Stop 10.98
370 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.38
371 TestStartStop/group/embed-certs/serial/SecondStart 554.64
373 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
374 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
375 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.43
376 TestStartStop/group/embed-certs/serial/Pause 3.1
TestDownloadOnly/v1.16.0/json-events (21.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-828000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-828000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (21.838811993s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (21.84s)
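A --download-only run creates no node; it only fills the local cache. As a minimal sketch (the paths are taken from the LogsDuration log further below, not newly invented), the artifacts this pass is expected to leave behind can be listed with:

	# Hypothetical cache check after the download-only run; MINIKUBE_HOME is the
	# Jenkins path shown in the logs below.
	ls -lh /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/
	ls -lh /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/darwin/amd64/v1.16.0/kubectl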

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-828000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-828000: exit status 85 (285.633116ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-828000 | jenkins | v1.29.0 | 23 Feb 23 13:58 PST |          |
	|         | -p download-only-828000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 13:58:31
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 13:58:31.855386   15212 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:58:31.856094   15212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:58:31.856122   15212 out.go:309] Setting ErrFile to fd 2...
	I0223 13:58:31.856134   15212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:58:31.856381   15212 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	W0223 13:58:31.856808   15212 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15909-14738/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15909-14738/.minikube/config/config.json: no such file or directory
	I0223 13:58:31.858438   15212 out.go:303] Setting JSON to true
	I0223 13:58:31.877315   15212 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5286,"bootTime":1677184225,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 13:58:31.877420   15212 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:58:31.900106   15212 out.go:97] [download-only-828000] minikube v1.29.0 on Darwin 13.2
	W0223 13:58:31.900360   15212 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball: no such file or directory
	I0223 13:58:31.900400   15212 notify.go:220] Checking for updates...
	I0223 13:58:31.921624   15212 out.go:169] MINIKUBE_LOCATION=15909
	I0223 13:58:31.942704   15212 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 13:58:31.964706   15212 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:58:31.986789   15212 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:58:32.008707   15212 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	W0223 13:58:32.051278   15212 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 13:58:32.051571   15212 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:58:32.111340   15212 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:58:32.111451   15212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:58:32.253854   15212 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-02-23 21:58:32.160127562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:58:32.275541   15212 out.go:97] Using the docker driver based on user configuration
	I0223 13:58:32.275691   15212 start.go:296] selected driver: docker
	I0223 13:58:32.275708   15212 start.go:857] validating driver "docker" against <nil>
	I0223 13:58:32.275917   15212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:58:32.421996   15212 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-02-23 21:58:32.326391166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:58:32.422120   15212 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:58:32.424496   15212 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0223 13:58:32.424632   15212 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 13:58:32.446627   15212 out.go:169] Using Docker Desktop driver with root privileges
	I0223 13:58:32.468528   15212 cni.go:84] Creating CNI manager for ""
	I0223 13:58:32.468569   15212 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 13:58:32.468582   15212 start_flags.go:319] config:
	{Name:download-only-828000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-828000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:58:32.490512   15212 out.go:97] Starting control plane node download-only-828000 in cluster download-only-828000
	I0223 13:58:32.490558   15212 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:58:32.512314   15212 out.go:97] Pulling base image ...
	I0223 13:58:32.512353   15212 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 13:58:32.512432   15212 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:58:32.566185   15212 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0223 13:58:32.566435   15212 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0223 13:58:32.566568   15212 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0223 13:58:32.610948   15212 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 13:58:32.610985   15212 cache.go:57] Caching tarball of preloaded images
	I0223 13:58:32.611333   15212 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 13:58:32.636183   15212 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0223 13:58:32.636238   15212 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 13:58:32.842640   15212 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 13:58:44.551566   15212 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 13:58:44.551706   15212 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 13:58:45.097643   15212 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 13:58:45.097908   15212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/download-only-828000/config.json ...
	I0223 13:58:45.097940   15212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/download-only-828000/config.json: {Name:mk704fce3540545b70628d1fd5c4ef822b413dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:58:45.098198   15212 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 13:58:45.098445   15212 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-828000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/v1.26.1/json-events (14.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-828000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-828000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker : (14.038566564s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (14.04s)

                                                
                                    
TestDownloadOnly/v1.26.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/kubectl
--- PASS: TestDownloadOnly/v1.26.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-828000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-828000: exit status 85 (287.597453ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-828000 | jenkins | v1.29.0 | 23 Feb 23 13:58 PST |          |
	|         | -p download-only-828000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-828000 | jenkins | v1.29.0 | 23 Feb 23 13:58 PST |          |
	|         | -p download-only-828000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 13:58:54
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 13:58:54.054895   15262 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:58:54.055076   15262 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:58:54.055082   15262 out.go:309] Setting ErrFile to fd 2...
	I0223 13:58:54.055086   15262 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:58:54.055191   15262 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	W0223 13:58:54.055285   15262 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15909-14738/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15909-14738/.minikube/config/config.json: no such file or directory
	I0223 13:58:54.056420   15262 out.go:303] Setting JSON to true
	I0223 13:58:54.074654   15262 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5309,"bootTime":1677184225,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 13:58:54.074738   15262 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:58:54.097344   15262 out.go:97] [download-only-828000] minikube v1.29.0 on Darwin 13.2
	I0223 13:58:54.097541   15262 notify.go:220] Checking for updates...
	I0223 13:58:54.119317   15262 out.go:169] MINIKUBE_LOCATION=15909
	I0223 13:58:54.161132   15262 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 13:58:54.182550   15262 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:58:54.204201   15262 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:58:54.225261   15262 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	W0223 13:58:54.266989   15262 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 13:58:54.267752   15262 config.go:182] Loaded profile config "download-only-828000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0223 13:58:54.267844   15262 start.go:765] api.Load failed for download-only-828000: filestore "download-only-828000": Docker machine "download-only-828000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0223 13:58:54.267916   15262 driver.go:365] Setting default libvirt URI to qemu:///system
	W0223 13:58:54.267952   15262 start.go:765] api.Load failed for download-only-828000: filestore "download-only-828000": Docker machine "download-only-828000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0223 13:58:54.327958   15262 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:58:54.328060   15262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:58:54.470473   15262 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-02-23 21:58:54.376936732 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:58:54.491942   15262 out.go:97] Using the docker driver based on existing profile
	I0223 13:58:54.491987   15262 start.go:296] selected driver: docker
	I0223 13:58:54.492000   15262 start.go:857] validating driver "docker" against &{Name:download-only-828000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-828000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP:}
	I0223 13:58:54.492352   15262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:58:54.641399   15262 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-02-23 21:58:54.5438849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServer
Address:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/
Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:58:54.643856   15262 cni.go:84] Creating CNI manager for ""
	I0223 13:58:54.643880   15262 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 13:58:54.643894   15262 start_flags.go:319] config:
	{Name:download-only-828000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-828000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:58:54.665429   15262 out.go:97] Starting control plane node download-only-828000 in cluster download-only-828000
	I0223 13:58:54.665459   15262 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:58:54.686204   15262 out.go:97] Pulling base image ...
	I0223 13:58:54.686302   15262 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:58:54.686380   15262 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:58:54.742891   15262 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0223 13:58:54.743118   15262 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0223 13:58:54.743142   15262 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory, skipping pull
	I0223 13:58:54.743148   15262 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in cache, skipping pull
	I0223 13:58:54.743164   15262 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc as a tarball
	I0223 13:58:54.767546   15262 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:58:54.767575   15262 cache.go:57] Caching tarball of preloaded images
	I0223 13:58:54.767905   15262 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:58:54.789610   15262 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0223 13:58:54.789726   15262 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0223 13:58:54.998248   15262 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:c6cc8ea1da4e19500d6fe35540785ea8 -> /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:59:03.934724   15262 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0223 13:59:03.934932   15262 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-14738/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-828000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.29s)
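Note on the "exit status 85" above: a --download-only start never creates a control plane node, so "minikube logs" is expected to refuse, and the test treats that as a pass. A rough local reproduction (a sketch only; profile name and versions are copied from the config dump earlier in this log, not a verbatim replay):

    # Sketch, assuming the flags shown in the config dump above
    out/minikube-darwin-amd64 start -p download-only-828000 --download-only --kubernetes-version=v1.26.1 --driver=docker
    out/minikube-darwin-amd64 logs -p download-only-828000   # fails: the control plane node "" does not exist
    echo $?                                                  # 85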

                                                
                                    
TestDownloadOnly/DeleteAll (0.66s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.66s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-828000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

                                                
                                    
TestDownloadOnlyKic (2.16s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-262000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-262000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-262000
--- PASS: TestDownloadOnlyKic (2.16s)

                                                
                                    
TestBinaryMirror (1.67s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-556000 --alsologtostderr --binary-mirror http://127.0.0.1:57052 --driver=docker 
aaa_download_only_test.go:308: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-556000 --alsologtostderr --binary-mirror http://127.0.0.1:57052 --driver=docker : (1.054485297s)
helpers_test.go:175: Cleaning up "binary-mirror-556000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-556000
--- PASS: TestBinaryMirror (1.67s)

                                                
                                    
TestOffline (49.96s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-016000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-016000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (46.679355868s)
helpers_test.go:175: Cleaning up "offline-docker-016000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-016000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-016000: (3.280859073s)
--- PASS: TestOffline (49.96s)

                                                
                                    
TestAddons/Setup (146.49s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-034000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-034000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.492869516s)
--- PASS: TestAddons/Setup (146.49s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.59s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.721237ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-7cv2g" [1a07a442-6e9c-487d-8142-26ec36b0ab59] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008397365s
addons_test.go:380: (dbg) Run:  kubectl --context addons-034000 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-darwin-amd64 -p addons-034000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.59s)
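The check above amounts to waiting for the metrics-server pod to become healthy and then confirming "kubectl top" returns data; condensed, using the same commands the test runs:

    kubectl --context addons-034000 top pods -n kube-system                       # succeeds once metrics-server is serving
    out/minikube-darwin-amd64 -p addons-034000 addons disable metrics-server --alsologtostderr -v=1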

                                                
                                    
TestAddons/parallel/HelmTiller (12.79s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 2.653806ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-sbbvd" [2ee96588-0875-4c1a-b92c-6c4639d87a15] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008863471s
addons_test.go:438: (dbg) Run:  kubectl --context addons-034000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-034000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.291252794s)
addons_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 -p addons-034000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.79s)

                                                
                                    
TestAddons/parallel/CSI (52.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 4.659376ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-034000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-034000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1e9cd6cb-d372-4ab4-8f7f-247f414f24fc] Pending
helpers_test.go:344: "task-pv-pod" [1e9cd6cb-d372-4ab4-8f7f-247f414f24fc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1e9cd6cb-d372-4ab4-8f7f-247f414f24fc] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.010072204s
addons_test.go:549: (dbg) Run:  kubectl --context addons-034000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-034000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-034000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-034000 delete pod task-pv-pod
addons_test.go:565: (dbg) Run:  kubectl --context addons-034000 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-034000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-034000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-034000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fa51faac-d041-4a9f-98e6-54e6b1b45940] Pending
helpers_test.go:344: "task-pv-pod-restore" [fa51faac-d041-4a9f-98e6-54e6b1b45940] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fa51faac-d041-4a9f-98e6-54e6b1b45940] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.007205264s
addons_test.go:591: (dbg) Run:  kubectl --context addons-034000 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-034000 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-034000 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-darwin-amd64 -p addons-034000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-darwin-amd64 -p addons-034000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.533782293s)
addons_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 -p addons-034000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.57s)
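For readers following the CSI section above, the same provision -> snapshot -> restore cycle condensed to the kubectl probes the test polls (the manifests live under minikube's testdata/csi-hostpath-driver/; this is a sketch of the flow, not the test itself):

    kubectl --context addons-034000 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-034000 get pvc hpvc -o jsonpath={.status.phase}                    # poll until Bound
    kubectl --context addons-034000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-034000 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-034000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}   # poll until true
    kubectl --context addons-034000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # new PVC restored from the snapshot
    kubectl --context addons-034000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml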

                                                
                                    
TestAddons/parallel/Headlamp (11.1s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-034000 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-034000 --alsologtostderr -v=1: (2.050287391s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-kwg4l" [0aa2eea2-2c30-4e61-8c3b-7dbce155285e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-kwg4l" [0aa2eea2-2c30-4e61-8c3b-7dbce155285e] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.04880458s
--- PASS: TestAddons/parallel/Headlamp (11.10s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-szchp" [06f2dc4f-513c-4f6d-9bdb-13b9b1560500] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009721177s
addons_test.go:813: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-034000
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-034000 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-034000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-034000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-034000: (10.996845461s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-034000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-034000
--- PASS: TestAddons/StoppedEnableDisable (11.42s)

                                                
                                    
TestCertOptions (33.46s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-497000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0223 14:45:09.269138   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-497000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (29.906552634s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-497000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-497000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-497000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-497000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-497000: (2.683963073s)
--- PASS: TestCertOptions (33.46s)
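The pass above rests on two probes: the generated API server certificate must carry the requested extra SANs, and admin.conf must point at the custom port. A minimal sketch of the same check (the grep filters are added here for illustration and are not part of the test):

    out/minikube-darwin-amd64 -p cert-options-497000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    out/minikube-darwin-amd64 ssh -p cert-options-497000 -- "sudo cat /etc/kubernetes/admin.conf" | grep server   # should show the custom port 8555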

                                                
                                    
TestCertExpiration (240.42s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-912000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-912000 --memory=2048 --cert-expiration=3m --driver=docker : (28.023755326s)
E0223 14:43:22.388162   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-912000 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-912000 --memory=2048 --cert-expiration=8760h --driver=docker : (29.712633571s)
helpers_test.go:175: Cleaning up "cert-expiration-912000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-912000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-912000: (2.683772088s)
--- PASS: TestCertExpiration (240.42s)

                                                
                                    
TestDockerFlags (39.52s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-723000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-723000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (36.115757328s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-723000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-723000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-723000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-723000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-723000: (2.583277623s)
--- PASS: TestDockerFlags (39.52s)
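What the two probes above assert, in short: the --docker-env values must appear in the Docker unit's Environment and the --docker-opt values in its ExecStart, both read back from inside the node. A sketch using the same commands recorded above:

    out/minikube-darwin-amd64 -p docker-flags-723000 ssh "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
    out/minikube-darwin-amd64 -p docker-flags-723000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # expect --debug and --icc=true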

                                                
                                    
TestForceSystemdFlag (33.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-212000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-212000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (30.08966241s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-212000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-212000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-212000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-212000: (2.675474966s)
--- PASS: TestForceSystemdFlag (33.19s)
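The assertion above reduces to asking Docker inside the node which cgroup driver it ended up with; with --force-systemd it should report systemd rather than cgroupfs (sketch, reusing the command from the log):

    out/minikube-darwin-amd64 -p force-systemd-flag-212000 ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd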

                                                
                                    
TestForceSystemdEnv (36.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-256000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0223 14:42:06.102414   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-256000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (33.946210294s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-256000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-256000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-256000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-256000: (2.621033162s)
--- PASS: TestForceSystemdEnv (36.99s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (11.55s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
E0223 14:41:40.162252   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
--- PASS: TestHyperKitDriverInstallOrUpdate (11.55s)

                                                
                                    
TestErrorSpam/setup (27.49s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-833000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-833000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 --driver=docker : (27.486123899s)
--- PASS: TestErrorSpam/setup (27.49s)

                                                
                                    
TestErrorSpam/start (2.41s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 start --dry-run
--- PASS: TestErrorSpam/start (2.41s)

                                                
                                    
TestErrorSpam/status (1.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 status
--- PASS: TestErrorSpam/status (1.24s)

                                                
                                    
TestErrorSpam/pause (1.74s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 pause
--- PASS: TestErrorSpam/pause (1.74s)

                                                
                                    
TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
TestErrorSpam/stop (11.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 stop: (10.889253955s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-833000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-833000 stop
--- PASS: TestErrorSpam/stop (11.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1820: local sync path: /Users/jenkins/minikube-integration/15909-14738/.minikube/files/etc/test/nested/copy/15210/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (42.9s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2199: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-769000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2199: (dbg) Done: out/minikube-darwin-amd64 start -p functional-769000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (42.896233831s)
--- PASS: TestFunctional/serial/StartWithProxy (42.90s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (45.9s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:653: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-769000 --alsologtostderr -v=8
functional_test.go:653: (dbg) Done: out/minikube-darwin-amd64 start -p functional-769000 --alsologtostderr -v=8: (45.90311044s)
functional_test.go:657: soft start took 45.903652825s for "functional-769000" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.90s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:675: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:690: (dbg) Run:  kubectl --context functional-769000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (8.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 cache add k8s.gcr.io/pause:3.1: (2.787675328s)
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 cache add k8s.gcr.io/pause:3.3: (2.803431072s)
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 cache add k8s.gcr.io/pause:latest
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 cache add k8s.gcr.io/pause:latest: (2.49970708s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1071: (dbg) Run:  docker build -t minikube-local-cache-test:functional-769000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local3302350148/001
functional_test.go:1083: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 cache add minikube-local-cache-test:functional-769000
functional_test.go:1083: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 cache add minikube-local-cache-test:functional-769000: (1.120232537s)
functional_test.go:1088: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 cache delete minikube-local-cache-test:functional-769000
functional_test.go:1077: (dbg) Run:  docker rmi minikube-local-cache-test:functional-769000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.65s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1096: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1104: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1141: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-769000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (394.174782ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1152: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 cache reload
functional_test.go:1152: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 cache reload: (1.595210201s)
functional_test.go:1157: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.82s)
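The reload check above follows a simple remove/verify/reload/verify loop; condensed here with the image and profile names reused from the log (a sketch of the flow, not the test code):

    out/minikube-darwin-amd64 -p functional-769000 ssh sudo docker rmi k8s.gcr.io/pause:latest
    out/minikube-darwin-amd64 -p functional-769000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # non-zero exit: image is gone
    out/minikube-darwin-amd64 -p functional-769000 cache reload                                       # pushes cached images back into the node
    out/minikube-darwin-amd64 -p functional-769000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again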

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1166: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1166: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:710: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 kubectl -- --context functional-769000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.51s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.8s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: (dbg) Run:  out/kubectl --context functional-769000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.80s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.12s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:751: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-769000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0223 14:06:40.078335   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:06:40.086007   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:06:40.097220   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:06:40.117609   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:06:40.158082   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:06:40.239808   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:06:40.400062   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:06:40.721236   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:06:41.361517   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:06:42.641776   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:06:45.202688   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:06:50.323540   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
functional_test.go:751: (dbg) Done: out/minikube-darwin-amd64 start -p functional-769000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.118242769s)
functional_test.go:755: restart took 44.11836012s for "functional-769000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.12s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:804: (dbg) Run:  kubectl --context functional-769000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:819: etcd phase: Running
functional_test.go:829: etcd status: Ready
functional_test.go:819: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver status: Ready
functional_test.go:819: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager status: Ready
functional_test.go:819: kube-scheduler phase: Running
functional_test.go:829: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                    
TestFunctional/serial/LogsCmd (3.06s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 logs
functional_test.go:1230: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 logs: (3.062555156s)
--- PASS: TestFunctional/serial/LogsCmd (3.06s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.1s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1603022866/001/logs.txt
functional_test.go:1244: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd1603022866/001/logs.txt: (3.096777156s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.10s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-769000 config get cpus: exit status 14 (46.571986ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 config set cpus 2
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 config get cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-769000 config get cpus: exit status 14 (73.49529ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
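Both "Non-zero exit" lines above are deliberate: "config get" on an unset key exits with status 14. The cycle the test runs, condensed (the "echo $?" is added here for illustration and is not part of the test):

    out/minikube-darwin-amd64 -p functional-769000 config unset cpus
    out/minikube-darwin-amd64 -p functional-769000 config get cpus; echo $?   # "Error: specified key could not be found in config", exit 14
    out/minikube-darwin-amd64 -p functional-769000 config set cpus 2
    out/minikube-darwin-amd64 -p functional-769000 config get cpus            # should now print 2
    out/minikube-darwin-amd64 -p functional-769000 config unset cpus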

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.92s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:899: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-769000 --alsologtostderr -v=1]
functional_test.go:904: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-769000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 17890: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.92s)

                                                
                                    
TestFunctional/parallel/DryRun (1.57s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-769000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:968: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-769000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (819.588486ms)

                                                
                                                
-- stdout --
	* [functional-769000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 14:08:06.565714   17811 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:08:06.565883   17811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:08:06.565889   17811 out.go:309] Setting ErrFile to fd 2...
	I0223 14:08:06.565893   17811 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:08:06.566002   17811 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:08:06.567255   17811 out.go:303] Setting JSON to false
	I0223 14:08:06.585843   17811 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5861,"bootTime":1677184225,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 14:08:06.585914   17811 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 14:08:06.608864   17811 out.go:177] * [functional-769000] minikube v1.29.0 on Darwin 13.2
	I0223 14:08:06.651424   17811 notify.go:220] Checking for updates...
	I0223 14:08:06.672432   17811 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 14:08:06.730167   17811 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:08:06.751697   17811 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 14:08:06.826256   17811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 14:08:06.884533   17811 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	I0223 14:08:06.906351   17811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 14:08:06.928110   17811 config.go:182] Loaded profile config "functional-769000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:08:06.928774   17811 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 14:08:06.991498   17811 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 14:08:06.991636   17811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:08:07.133766   17811 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 22:08:07.042189221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:08:07.177251   17811 out.go:177] * Using the docker driver based on existing profile
	I0223 14:08:07.198373   17811 start.go:296] selected driver: docker
	I0223 14:08:07.198388   17811 start.go:857] validating driver "docker" against &{Name:functional-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-769000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:08:07.198485   17811 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 14:08:07.222331   17811 out.go:177] 
	W0223 14:08:07.243270   17811 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0223 14:08:07.280485   17811 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:985: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-769000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.57s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1014: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-769000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1014: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-769000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (687.746133ms)

                                                
                                                
-- stdout --
	* [functional-769000] minikube v1.29.0 sur Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 14:08:08.130156   17850 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:08:08.130349   17850 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:08:08.130354   17850 out.go:309] Setting ErrFile to fd 2...
	I0223 14:08:08.130359   17850 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:08:08.130484   17850 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:08:08.132133   17850 out.go:303] Setting JSON to false
	I0223 14:08:08.152550   17850 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5863,"bootTime":1677184225,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0223 14:08:08.152630   17850 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 14:08:08.174692   17850 out.go:177] * [functional-769000] minikube v1.29.0 sur Darwin 13.2
	I0223 14:08:08.216592   17850 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 14:08:08.216599   17850 notify.go:220] Checking for updates...
	I0223 14:08:08.259718   17850 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	I0223 14:08:08.280647   17850 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 14:08:08.301894   17850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 14:08:08.343759   17850 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	I0223 14:08:08.390686   17850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 14:08:08.412110   17850 config.go:182] Loaded profile config "functional-769000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:08:08.413103   17850 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 14:08:08.475311   17850 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 14:08:08.475427   17850 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 14:08:08.624106   17850 info.go:266] docker info: {ID:RH4V:QTTE:6TLS:W74U:72T3:655A:HQAB:RRVU:2KDD:S3E6:3223:HKLC Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:57 SystemTime:2023-02-23 22:08:08.527873045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 14:08:08.645352   17850 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0223 14:08:08.666158   17850 start.go:296] selected driver: docker
	I0223 14:08:08.666175   17850 start.go:857] validating driver "docker" against &{Name:functional-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-769000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 14:08:08.666263   17850 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 14:08:08.689925   17850 out.go:177] 
	W0223 14:08:08.711222   17850 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0223 14:08:08.731997   17850 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.69s)
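
The InternationalLanguage check re-runs the DryRun scenario but expects the RSRC_INSUFFICIENT_REQ_MEMORY message localized to French; the harness presumably forces a French locale (e.g. LC_ALL=fr) on the child process. A rough manual reproduction, assuming locale selection works via LC_ALL:

    LC_ALL=fr out/minikube-darwin-amd64 start -p functional-769000 --dry-run --memory 250MB --alsologtostderr --driver=docker
    # expected: exit status 23 and the French "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..." message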

                                                
                                    
TestFunctional/parallel/StatusCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:848: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 status
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:866: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.55s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1658: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 addons list
functional_test.go:1670: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [798770bd-b902-43dd-8331-dde107768957] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010641819s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-769000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-769000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-769000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-769000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0d58ab80-1757-4467-bc6c-b102bfb24378] Pending
helpers_test.go:344: "sp-pod" [0d58ab80-1757-4467-bc6c-b102bfb24378] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0d58ab80-1757-4467-bc6c-b102bfb24378] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.009591771s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-769000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-769000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-769000 delete -f testdata/storage-provisioner/pod.yaml: (1.138947879s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-769000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [63f46ab8-d955-4726-ac03-1e2926257684] Pending
E0223 14:08:02.004363   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [63f46ab8-d955-4726-ac03-1e2926257684] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [63f46ab8-d955-4726-ac03-1e2926257684] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009855957s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-769000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.81s)
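
The sequence above is a persistence check: write a file through the claim, delete the pod, recreate it, and confirm the file survives. A condensed manual version of the same steps (a sketch only; it assumes the functional-769000 profile is running and uses the minikube repo's testdata manifests):

    kubectl --context functional-769000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-769000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-769000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-769000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-769000 apply -f testdata/storage-provisioner/pod.yaml
    # the file created before the delete should still be visible in the recreated pod
    kubectl --context functional-769000 exec sp-pod -- ls /tmp/mount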

                                                
                                    
TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1693: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "echo hello"
functional_test.go:1710: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh -n functional-769000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 cp functional-769000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd999346355/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh -n functional-769000 "sudo cat /home/docker/cp-test.txt"
E0223 14:07:00.563728   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/CpCmd (2.09s)

                                                
                                    
TestFunctional/parallel/MySQL (24.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1758: (dbg) Run:  kubectl --context functional-769000 replace --force -f testdata/mysql.yaml
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-25dnx" [563683e7-e36e-400c-a2f4-59e3336cbae0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-25dnx" [563683e7-e36e-400c-a2f4-59e3336cbae0] Running
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.018894148s
functional_test.go:1772: (dbg) Run:  kubectl --context functional-769000 exec mysql-888f84dd9-25dnx -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-769000 exec mysql-888f84dd9-25dnx -- mysql -ppassword -e "show databases;": exit status 1 (161.97395ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-769000 exec mysql-888f84dd9-25dnx -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-769000 exec mysql-888f84dd9-25dnx -- mysql -ppassword -e "show databases;": exit status 1 (135.043597ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-769000 exec mysql-888f84dd9-25dnx -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-769000 exec mysql-888f84dd9-25dnx -- mysql -ppassword -e "show databases;": exit status 1 (209.795727ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-769000 exec mysql-888f84dd9-25dnx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.43s)
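
The non-zero exits above are the expected readiness race: the pod reports Running before mysqld accepts connections (the 1045 and 2002 errors are typical while the official mysql image is still initializing), so the test retries the probe until it succeeds:

    kubectl --context functional-769000 exec mysql-888f84dd9-25dnx -- mysql -ppassword -e "show databases;"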

                                                
                                    
TestFunctional/parallel/FileSync (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1894: Checking for existence of /etc/test/nested/copy/15210/hosts within VM
functional_test.go:1896: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "sudo cat /etc/test/nested/copy/15210/hosts"
functional_test.go:1901: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.45s)

                                                
                                    
TestFunctional/parallel/CertSync (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1937: Checking for existence of /etc/ssl/certs/15210.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "sudo cat /etc/ssl/certs/15210.pem"
functional_test.go:1937: Checking for existence of /usr/share/ca-certificates/15210.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "sudo cat /usr/share/ca-certificates/15210.pem"
functional_test.go:1937: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/152102.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "sudo cat /etc/ssl/certs/152102.pem"
functional_test.go:1964: Checking for existence of /usr/share/ca-certificates/152102.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "sudo cat /usr/share/ca-certificates/152102.pem"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.53s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-769000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1992: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "sudo systemctl is-active crio"
functional_test.go:1992: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-769000 ssh "sudo systemctl is-active crio": exit status 1 (578.934735ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
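
The failed ssh command here is the expected result: on a Docker-runtime cluster crio should be inactive, and systemctl is-active exits with status 3 for an inactive unit while printing "inactive" on stdout. The whole check boils down to:

    out/minikube-darwin-amd64 -p functional-769000 ssh "sudo systemctl is-active crio"
    # stdout: inactive; exit status 3, surfaced through ssh as a non-zero exit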

                                                
                                    
TestFunctional/parallel/License (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2253: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.75s)

                                                
                                    
TestFunctional/parallel/Version/short (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2221: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.23s)

                                                
                                    
TestFunctional/parallel/Version/components (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2235: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 version -o=json --components
functional_test.go:2235: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 version -o=json --components: (1.045004996s)
--- PASS: TestFunctional/parallel/Version/components (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image ls --format short
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-769000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-769000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-769000
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image ls --format table
2023/02/23 14:08:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-769000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 3f8a00f137a0d | 142MB  |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-769000 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-769000 | bc3e7b03fcc89 | 30B    |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/localhost/my-image                | functional-769000 | f555ede624431 | 1.24MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image ls --format json
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-769000 image ls --format json:
[{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"bc3e7b03fcc8991833a1f687168ad82521121c114d98190c425768959a460520","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-769000"],"size":"30"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4
f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboa
rd:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-769000"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"
id":"f555ede624431ebecba010bea9399f8cbd77ba0e0827dd4d95f9528805cd3391","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-769000"],"size":"1240000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image ls --format yaml
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-769000 image ls --format yaml:
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: bc3e7b03fcc8991833a1f687168ad82521121c114d98190c425768959a460520
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-769000
size: "30"
- id: 3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-769000
size: "32900000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (8.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh pgrep buildkitd
functional_test.go:305: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-769000 ssh pgrep buildkitd: exit status 1 (393.781824ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image build -t localhost/my-image:functional-769000 testdata/build
functional_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 image build -t localhost/my-image:functional-769000 testdata/build: (8.215716422s)
functional_test.go:317: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-769000 image build -t localhost/my-image:functional-769000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in aef8cac290a2
Removing intermediate container aef8cac290a2
---> 48439d00807c
Step 3/3 : ADD content.txt /
---> f555ede62443
Successfully built f555ede62443
Successfully tagged localhost/my-image:functional-769000
functional_test.go:320: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-769000 image build -t localhost/my-image:functional-769000 testdata/build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.91s)
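
The build output above comes from the legacy Docker builder inside the cluster; testdata/build contains the three-step Dockerfile echoed in the log (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt). The equivalent manual invocation is:

    out/minikube-darwin-amd64 -p functional-769000 image build -t localhost/my-image:functional-769000 testdata/build
    out/minikube-darwin-amd64 -p functional-769000 image ls    # localhost/my-image:functional-769000 should now be listed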

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:339: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.697600579s)
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-769000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.76s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:493: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-769000 docker-env) && out/minikube-darwin-amd64 status -p functional-769000"
functional_test.go:493: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-769000 docker-env) && out/minikube-darwin-amd64 status -p functional-769000": (1.219539113s)
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-769000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.83s)
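
docker-env exports DOCKER_HOST and related variables so the host's docker CLI talks to the daemon inside the functional-769000 container; the test verifies that both minikube status and docker images still work under that environment. A minimal manual check:

    eval $(out/minikube-darwin-amd64 -p functional-769000 docker-env)
    out/minikube-darwin-amd64 status -p functional-769000
    docker images    # now lists the cluster's images (e.g. the registry.k8s.io/kube-* images shown above)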

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.39s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image load --daemon gcr.io/google-containers/addon-resizer:functional-769000
functional_test.go:352: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 image load --daemon gcr.io/google-containers/addon-resizer:functional-769000: (3.361205018s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image load --daemon gcr.io/google-containers/addon-resizer:functional-769000
functional_test.go:362: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 image load --daemon gcr.io/google-containers/addon-resizer:functional-769000: (2.193696544s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:232: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.581342059s)
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-769000
functional_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image load --daemon gcr.io/google-containers/addon-resizer:functional-769000
functional_test.go:242: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 image load --daemon gcr.io/google-containers/addon-resizer:functional-769000: (3.887691011s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.91s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image save gcr.io/google-containers/addon-resizer:functional-769000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:377: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 image save gcr.io/google-containers/addon-resizer:functional-769000 /Users/jenkins/workspace/addon-resizer-save.tar: (1.976560372s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.98s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image rm gcr.io/google-containers/addon-resizer:functional-769000
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:406: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.420084853s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.74s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-769000
functional_test.go:421: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 image save --daemon gcr.io/google-containers/addon-resizer:functional-769000
E0223 14:07:21.044412   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
functional_test.go:421: (dbg) Done: out/minikube-darwin-amd64 -p functional-769000 image save --daemon gcr.io/google-containers/addon-resizer:functional-769000: (2.357431472s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-769000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.47s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-769000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-769000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [54cd2786-18b3-4ebf-89b1-517a5a0b1499] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [54cd2786-18b3-4ebf-89b1-517a5a0b1499] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.007397622s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/ServiceJSONOutput
functional_test.go:1547: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 service list -o json
functional_test.go:1552: Took "622.226935ms" to run "out/minikube-darwin-amd64 -p functional-769000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.62s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-769000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-769000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 17469: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1267: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1272: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1307: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1312: Took "503.079462ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1321: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1326: Took "70.837495ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1358: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1363: Took "439.778306ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1371: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1376: Took "67.846688ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

TestFunctional/parallel/MountCmd/any-port (10.75s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-769000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port926383815/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1677190073524338000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port926383815/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1677190073524338000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port926383815/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1677190073524338000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port926383815/001/test-1677190073524338000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-769000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (423.154364ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 23 22:07 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 23 22:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 23 22:07 test-1677190073524338000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh cat /mount-9p/test-1677190073524338000
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-769000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ca0f5c82-edc0-41c0-a123-fa3bd2ca3224] Pending
helpers_test.go:344: "busybox-mount" [ca0f5c82-edc0-41c0-a123-fa3bd2ca3224] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ca0f5c82-edc0-41c0-a123-fa3bd2ca3224] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ca0f5c82-edc0-41c0-a123-fa3bd2ca3224] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.01047973s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-769000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-769000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port926383815/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.75s)

TestFunctional/parallel/MountCmd/specific-port (2.26s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-769000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2552343304/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-769000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (383.771344ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-769000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2552343304/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-769000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-769000 ssh "sudo umount -f /mount-9p": exit status 1 (373.290353ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-769000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-769000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2552343304/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.26s)

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-769000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-769000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-769000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (2.33s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-531000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-531000: (2.331454194s)
--- PASS: TestImageBuild/serial/NormalBuild (2.33s)

TestImageBuild/serial/BuildWithBuildArg (0.94s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-531000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-531000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.4s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-531000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.40s)

TestJSONOutput/start/Command (52.06s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-922000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0223 14:17:33.709384   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-922000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (52.061599372s)
--- PASS: TestJSONOutput/start/Command (52.06s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-922000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-922000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.75s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-922000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-922000 --output=json --user=testUser: (5.744839277s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.74s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-904000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-904000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (347.520296ms)

-- stdout --
	{"specversion":"1.0","id":"a47e2c41-2dee-4af4-a0e5-3250ebfb94eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-904000] minikube v1.29.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c7382dc-8b5c-4165-916e-83a8b1da99bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"5996850b-8ef5-4a4c-89f5-03aed5cf0734","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig"}}
	{"specversion":"1.0","id":"80087db7-80e2-48d1-b359-c9b67515086c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"bffc017e-41a2-4f88-b32a-83913d6e6841","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c9adf492-5a5d-469f-b55e-82ca7ba80adf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube"}}
	{"specversion":"1.0","id":"41153d53-b110-495d-bc0a-31980759d7c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1d091342-60a1-4cde-bae0-fe7798876d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-904000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-904000
--- PASS: TestErrorJSONOutput (0.74s)

TestKicCustomNetwork/create_custom_network (31.51s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-353000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-353000 --network=: (28.856639763s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-353000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-353000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-353000: (2.595959586s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.51s)

TestKicCustomNetwork/use_default_bridge_network (29.29s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-838000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-838000 --network=bridge: (26.821752637s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-838000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-838000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-838000: (2.410014243s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.29s)

TestKicExistingNetwork (29.4s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-507000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-507000 --network=existing-network: (26.774573478s)
helpers_test.go:175: Cleaning up "existing-network-507000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-507000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-507000: (2.218498677s)
--- PASS: TestKicExistingNetwork (29.40s)

TestKicCustomSubnet (30.35s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-362000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-362000 --subnet=192.168.60.0/24: (27.677728739s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-362000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-362000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-362000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-362000: (2.615872358s)
--- PASS: TestKicCustomSubnet (30.35s)

TestKicStaticIP (31.35s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-685000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-685000 --static-ip=192.168.200.200: (28.521495371s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-685000 ip
helpers_test.go:175: Cleaning up "static-ip-685000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-685000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-685000: (2.597024501s)
--- PASS: TestKicStaticIP (31.35s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (63.08s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-013000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-013000 --driver=docker : (27.095015985s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-015000 --driver=docker 
E0223 14:21:40.153488   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-015000 --driver=docker : (29.090695489s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-013000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-015000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-015000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-015000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-015000: (2.58372784s)
helpers_test.go:175: Cleaning up "first-013000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-013000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-013000: (2.592953397s)
--- PASS: TestMinikubeProfile (63.08s)

TestMountStart/serial/StartWithMountFirst (8.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-990000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-990000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.093733567s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.09s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-990000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (8.01s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-004000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-004000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.002926373s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.01s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-004000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (2.12s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-990000 --alsologtostderr -v=5
E0223 14:22:06.097289   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-990000 --alsologtostderr -v=5: (2.119067689s)
--- PASS: TestMountStart/serial/DeleteFirst (2.12s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-004000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.59s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-004000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-004000: (1.588149179s)
--- PASS: TestMountStart/serial/Stop (1.59s)

TestMountStart/serial/RestartStopped (6.07s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-004000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-004000: (5.065442216s)
--- PASS: TestMountStart/serial/RestartStopped (6.07s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-004000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (76.69s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-359000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0223 14:23:03.202586   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-359000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m15.993896127s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.69s)

TestMultiNode/serial/AddNode (22.1s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-359000 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-359000 -v 3 --alsologtostderr: (20.941693852s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-359000 status --alsologtostderr: (1.154475604s)
--- PASS: TestMultiNode/serial/AddNode (22.10s)

TestMultiNode/serial/ProfileList (0.44s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

TestMultiNode/serial/CopyFile (14.59s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 cp testdata/cp-test.txt multinode-359000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 cp multinode-359000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1634849575/001/cp-test_multinode-359000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 cp multinode-359000:/home/docker/cp-test.txt multinode-359000-m02:/home/docker/cp-test_multinode-359000_multinode-359000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000-m02 "sudo cat /home/docker/cp-test_multinode-359000_multinode-359000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 cp multinode-359000:/home/docker/cp-test.txt multinode-359000-m03:/home/docker/cp-test_multinode-359000_multinode-359000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000-m03 "sudo cat /home/docker/cp-test_multinode-359000_multinode-359000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 cp testdata/cp-test.txt multinode-359000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 cp multinode-359000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1634849575/001/cp-test_multinode-359000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 cp multinode-359000-m02:/home/docker/cp-test.txt multinode-359000:/home/docker/cp-test_multinode-359000-m02_multinode-359000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000 "sudo cat /home/docker/cp-test_multinode-359000-m02_multinode-359000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 cp multinode-359000-m02:/home/docker/cp-test.txt multinode-359000-m03:/home/docker/cp-test_multinode-359000-m02_multinode-359000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000-m03 "sudo cat /home/docker/cp-test_multinode-359000-m02_multinode-359000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 cp testdata/cp-test.txt multinode-359000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 cp multinode-359000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1634849575/001/cp-test_multinode-359000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 cp multinode-359000-m03:/home/docker/cp-test.txt multinode-359000:/home/docker/cp-test_multinode-359000-m03_multinode-359000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000 "sudo cat /home/docker/cp-test_multinode-359000-m03_multinode-359000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 cp multinode-359000-m03:/home/docker/cp-test.txt multinode-359000-m02:/home/docker/cp-test_multinode-359000-m03_multinode-359000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 ssh -n multinode-359000-m02 "sudo cat /home/docker/cp-test_multinode-359000-m03_multinode-359000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.59s)

TestMultiNode/serial/StopNode (3s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-359000 node stop m03: (1.512403114s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-359000 status: exit status 7 (746.484254ms)

-- stdout --
	multinode-359000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-359000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-359000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-359000 status --alsologtostderr: exit status 7 (741.897043ms)

-- stdout --
	multinode-359000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-359000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-359000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0223 14:24:27.622674   21696 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:24:27.622855   21696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:24:27.622860   21696 out.go:309] Setting ErrFile to fd 2...
	I0223 14:24:27.622864   21696 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:24:27.622976   21696 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:24:27.623156   21696 out.go:303] Setting JSON to false
	I0223 14:24:27.623181   21696 mustload.go:65] Loading cluster: multinode-359000
	I0223 14:24:27.623232   21696 notify.go:220] Checking for updates...
	I0223 14:24:27.623507   21696 config.go:182] Loaded profile config "multinode-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:24:27.623518   21696 status.go:255] checking status of multinode-359000 ...
	I0223 14:24:27.623894   21696 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:24:27.681446   21696 status.go:330] multinode-359000 host status = "Running" (err=<nil>)
	I0223 14:24:27.681473   21696 host.go:66] Checking if "multinode-359000" exists ...
	I0223 14:24:27.681719   21696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000
	I0223 14:24:27.738063   21696 host.go:66] Checking if "multinode-359000" exists ...
	I0223 14:24:27.738338   21696 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 14:24:27.738398   21696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:24:27.796314   21696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58730 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000/id_rsa Username:docker}
	I0223 14:24:27.887596   21696 ssh_runner.go:195] Run: systemctl --version
	I0223 14:24:27.891997   21696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:24:27.901659   21696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-359000
	I0223 14:24:27.959491   21696 kubeconfig.go:92] found "multinode-359000" server: "https://127.0.0.1:58734"
	I0223 14:24:27.959517   21696 api_server.go:165] Checking apiserver status ...
	I0223 14:24:27.959559   21696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 14:24:27.969618   21696 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2006/cgroup
	W0223 14:24:27.977822   21696 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2006/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0223 14:24:27.977884   21696 ssh_runner.go:195] Run: ls
	I0223 14:24:27.981706   21696 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:58734/healthz ...
	I0223 14:24:27.986521   21696 api_server.go:278] https://127.0.0.1:58734/healthz returned 200:
	ok
	I0223 14:24:27.986533   21696 status.go:421] multinode-359000 apiserver status = Running (err=<nil>)
	I0223 14:24:27.986544   21696 status.go:257] multinode-359000 status: &{Name:multinode-359000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 14:24:27.986556   21696 status.go:255] checking status of multinode-359000-m02 ...
	I0223 14:24:27.986788   21696 cli_runner.go:164] Run: docker container inspect multinode-359000-m02 --format={{.State.Status}}
	I0223 14:24:28.045993   21696 status.go:330] multinode-359000-m02 host status = "Running" (err=<nil>)
	I0223 14:24:28.046016   21696 host.go:66] Checking if "multinode-359000-m02" exists ...
	I0223 14:24:28.046287   21696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-359000-m02
	I0223 14:24:28.103681   21696 host.go:66] Checking if "multinode-359000-m02" exists ...
	I0223 14:24:28.103951   21696 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 14:24:28.104006   21696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-359000-m02
	I0223 14:24:28.160878   21696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58798 SSHKeyPath:/Users/jenkins/minikube-integration/15909-14738/.minikube/machines/multinode-359000-m02/id_rsa Username:docker}
	I0223 14:24:28.251250   21696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 14:24:28.261102   21696 status.go:257] multinode-359000-m02 status: &{Name:multinode-359000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0223 14:24:28.261122   21696 status.go:255] checking status of multinode-359000-m03 ...
	I0223 14:24:28.261409   21696 cli_runner.go:164] Run: docker container inspect multinode-359000-m03 --format={{.State.Status}}
	I0223 14:24:28.317798   21696 status.go:330] multinode-359000-m03 host status = "Stopped" (err=<nil>)
	I0223 14:24:28.317819   21696 status.go:343] host is not running, skipping remaining checks
	I0223 14:24:28.317830   21696 status.go:257] multinode-359000-m03 status: &{Name:multinode-359000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
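Note: the stderr trace above shows the sequence minikube status walks for each node: inspect the container state with the Docker CLI, probe the kubelet unit over SSH, then hit the apiserver /healthz endpoint on the locally forwarded port. A minimal sketch of the same checks done by hand, using the profile name and forwarded port (58734) captured in this run (both will differ on other hosts):

$ docker container inspect multinode-359000 --format '{{.State.Status}}'   # host state
$ out/minikube-darwin-amd64 -p multinode-359000 ssh -- sudo systemctl is-active --quiet service kubelet && echo kubelet active
$ curl -sk https://127.0.0.1:58734/healthz   # apiserver health probe on the forwarded port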
--- PASS: TestMultiNode/serial/StopNode (3.00s)

TestMultiNode/serial/StartAfterStop (10.14s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-359000 node start m03 --alsologtostderr: (9.061127183s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.14s)

TestMultiNode/serial/RestartKeepsNodes (87.19s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-359000
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-359000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-359000: (23.070016193s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-359000 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-359000 --wait=true -v=8 --alsologtostderr: (1m4.023029771s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-359000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (87.19s)

TestMultiNode/serial/DeleteNode (6.13s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-359000 node delete m03: (5.259751525s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
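Note: the go-template in the command above prints the status of each node's Ready condition, one True/False per line, which is how the test confirms the surviving nodes still report Ready after the delete. An equivalent query, assuming a standard kubectl with JSONPath support, would be:

$ kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'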
--- PASS: TestMultiNode/serial/DeleteNode (6.13s)

TestMultiNode/serial/StopMultiNode (21.85s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-359000 stop: (21.526195103s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-359000 status: exit status 7 (158.806077ms)

-- stdout --
	multinode-359000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-359000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-359000 status --alsologtostderr: exit status 7 (164.460211ms)

-- stdout --
	multinode-359000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-359000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0223 14:26:33.507104   22235 out.go:296] Setting OutFile to fd 1 ...
	I0223 14:26:33.507278   22235 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:26:33.507284   22235 out.go:309] Setting ErrFile to fd 2...
	I0223 14:26:33.507288   22235 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 14:26:33.507406   22235 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-14738/.minikube/bin
	I0223 14:26:33.507591   22235 out.go:303] Setting JSON to false
	I0223 14:26:33.507615   22235 mustload.go:65] Loading cluster: multinode-359000
	I0223 14:26:33.507667   22235 notify.go:220] Checking for updates...
	I0223 14:26:33.507901   22235 config.go:182] Loaded profile config "multinode-359000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 14:26:33.507914   22235 status.go:255] checking status of multinode-359000 ...
	I0223 14:26:33.508323   22235 cli_runner.go:164] Run: docker container inspect multinode-359000 --format={{.State.Status}}
	I0223 14:26:33.565150   22235 status.go:330] multinode-359000 host status = "Stopped" (err=<nil>)
	I0223 14:26:33.565176   22235 status.go:343] host is not running, skipping remaining checks
	I0223 14:26:33.565187   22235 status.go:257] multinode-359000 status: &{Name:multinode-359000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 14:26:33.565219   22235 status.go:255] checking status of multinode-359000-m02 ...
	I0223 14:26:33.565483   22235 cli_runner.go:164] Run: docker container inspect multinode-359000-m02 --format={{.State.Status}}
	I0223 14:26:33.624743   22235 status.go:330] multinode-359000-m02 host status = "Stopped" (err=<nil>)
	I0223 14:26:33.624768   22235 status.go:343] host is not running, skipping remaining checks
	I0223 14:26:33.624776   22235 status.go:257] multinode-359000-m02 status: &{Name:multinode-359000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.85s)

TestMultiNode/serial/RestartMultiNode (71.29s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-359000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0223 14:26:40.157312   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:27:06.096969   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-359000 --wait=true -v=8 --alsologtostderr --driver=docker : (1m10.409529462s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-359000 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (71.29s)

TestMultiNode/serial/ValidateNameConflict (32.88s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-359000
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-359000-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-359000-m02 --driver=docker : exit status 14 (557.101969ms)

-- stdout --
	* [multinode-359000-m02] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-359000-m02' is duplicated with machine name 'multinode-359000-m02' in profile 'multinode-359000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-359000-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-359000-m03 --driver=docker : (29.212608145s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-359000
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-359000: exit status 80 (475.007991ms)

-- stdout --
	* Adding node m03 to cluster multinode-359000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-359000-m03 already exists in multinode-359000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-359000-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-359000-m03: (2.592509405s)
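Note: minikube derives the machine names of additional nodes from the profile name (multinode-359000-m02, -m03, ...), so a standalone profile named multinode-359000-m02 collides with the existing worker, and node add later refuses to create m03 while a profile of that name exists. Listing current profiles and their nodes first avoids the clash, for example with the command already used elsewhere in this run:

$ out/minikube-darwin-amd64 profile list --output json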
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.88s)

TestPreload (135.77s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-598000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0223 14:28:29.153061   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-598000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m3.290846178s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-598000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-598000 -- docker pull gcr.io/k8s-minikube/busybox: (7.710001524s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-598000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-598000: (10.838501924s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-598000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-598000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (50.86023278s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-598000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-598000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-598000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-598000: (2.661954524s)
--- PASS: TestPreload (135.77s)

TestScheduledStopUnix (103.07s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-566000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-566000 --memory=2048 --driver=docker : (28.916060141s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-566000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-566000 -n scheduled-stop-566000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-566000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-566000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-566000 -n scheduled-stop-566000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-566000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-566000 --schedule 15s
E0223 14:31:40.158040   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0223 14:32:06.100602   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-566000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-566000: exit status 7 (108.854949ms)

-- stdout --
	scheduled-stop-566000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-566000 -n scheduled-stop-566000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-566000 -n scheduled-stop-566000: exit status 7 (105.79209ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-566000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-566000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-566000: (2.311697124s)
--- PASS: TestScheduledStopUnix (103.07s)

TestSkaffold (69.82s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1649506237 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-583000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-583000 --memory=2600 --driver=docker : (32.063402648s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1649506237 run --minikube-profile skaffold-583000 --kube-context skaffold-583000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe1649506237 run --minikube-profile skaffold-583000 --kube-context skaffold-583000 --status-check=true --port-forward=false --interactive=false: (18.000604537s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7d48647cb9-584fr" [2eaadd81-d43f-45fc-ba43-7648001f27d7] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.015190334s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-78b675f874-qg925" [511bf831-cbbf-4029-8d61-cfdd43e4342f] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00701313s
helpers_test.go:175: Cleaning up "skaffold-583000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-583000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-583000: (2.899303246s)
--- PASS: TestSkaffold (69.82s)

TestInsufficientStorage (14.19s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-275000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-275000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.078473936s)

-- stdout --
	{"specversion":"1.0","id":"49196547-0dab-4bc0-9108-0a5fceb6818b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-275000] minikube v1.29.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d294de3-1c54-4800-8332-8c44c21e1c6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"33a9ebc0-8cfc-4a47-8ade-6b76d68f27b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig"}}
	{"specversion":"1.0","id":"563743b0-1b2b-4fa5-9e17-2337f59eedd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"ed4a1044-b750-4da7-8e84-8990040c816b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3f3210cd-83e8-4579-8ff9-d01da4dd4500","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube"}}
	{"specversion":"1.0","id":"e6699fb7-380a-4140-ba6e-f6eb61d57265","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ef260b22-9da2-427c-ab53-bc103543ebe6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0d148bb2-ba96-4622-b7d3-49cd822152ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"be2f69ce-5246-4abb-98b9-3599549d3afe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc088efd-f136-4a79-b770-397ce986a777","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"c3b00eec-09f5-4aec-ab40-fea5dbd75cdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-275000 in cluster insufficient-storage-275000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ef158bd-4b2a-46ed-8680-5edfb0c3ecc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b54b252-e734-42e3-a2a7-064fc1e17c69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f8ee96c-9ca0-42b6-8e81-74d6e0d23074","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
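Note: with --output=json, minikube start emits one CloudEvent per line, so the stream above can be filtered with a JSON tool. A sketch, assuming jq is installed, that extracts only the error event's message and advice:

$ out/minikube-darwin-amd64 start -p insufficient-storage-275000 --memory=2048 --output=json --wait=true --driver=docker \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message, .data.advice'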
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-275000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-275000 --output=json --layout=cluster: exit status 7 (387.045356ms)

-- stdout --
	{"Name":"insufficient-storage-275000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-275000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0223 14:33:46.658241   24060 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-275000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-275000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-275000 --output=json --layout=cluster: exit status 7 (384.783016ms)

-- stdout --
	{"Name":"insufficient-storage-275000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-275000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0223 14:33:47.043652   24070 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-275000" does not appear in /Users/jenkins/minikube-integration/15909-14738/kubeconfig
	E0223 14:33:47.052670   24070 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/insufficient-storage-275000/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-275000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-275000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-275000: (2.343053747s)
--- PASS: TestInsufficientStorage (14.19s)

TestStoppedBinaryUpgrade/Setup (3.37s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.37s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.45s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-757000
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-757000: (3.451893004s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.45s)

TestPause/serial/Start (48.99s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-731000 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-731000 --memory=2048 --install-addons=false --wait=all --driver=docker : (48.993244621s)
--- PASS: TestPause/serial/Start (48.99s)

TestPause/serial/SecondStartNoReconfiguration (42.81s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-731000 --alsologtostderr -v=1 --driver=docker 
E0223 14:38:22.280920   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:38:22.286197   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:38:22.297739   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:38:22.318106   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:38:22.358874   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:38:22.439601   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:38:22.601186   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:38:22.921408   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:38:23.562567   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:38:24.843140   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:38:27.403315   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:38:32.524446   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:38:42.766635   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-731000 --alsologtostderr -v=1 --driver=docker : (42.798423928s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.81s)

TestPause/serial/Pause (0.91s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-731000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.91s)

TestPause/serial/VerifyStatus (0.51s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-731000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-731000 --output=json --layout=cluster: exit status 2 (514.374459ms)

-- stdout --
	{"Name":"pause-731000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-731000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
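Note: in the --layout=cluster JSON above, the numeric StatusCode values pair with the StatusName strings shown in this report (200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage), so the non-zero exit here accompanies a paused rather than a broken cluster. A sketch for summarizing per-component state, assuming jq is installed:

$ out/minikube-darwin-amd64 status -p pause-731000 --output=json --layout=cluster \
    | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'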
--- PASS: TestPause/serial/VerifyStatus (0.51s)

TestPause/serial/Unpause (0.66s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-731000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.99s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-731000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.99s)

TestPause/serial/DeletePaused (2.76s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-731000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-731000 --alsologtostderr -v=5: (2.757707708s)
--- PASS: TestPause/serial/DeletePaused (2.76s)

TestPause/serial/VerifyDeletedResources (0.57s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-731000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-731000: exit status 1 (54.4501ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-731000

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.57s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.72s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-972000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-972000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (724.730577ms)

-- stdout --
	* [NoKubernetes-972000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-14738/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.72s)

TestNoKubernetes/serial/StartWithK8s (33.05s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-972000 --driver=docker 
E0223 14:39:03.246998   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-972000 --driver=docker : (32.633868954s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-972000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.05s)

TestNoKubernetes/serial/StartWithStopK8s (17.58s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-972000 --no-kubernetes --driver=docker 
E0223 14:39:43.209431   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:39:44.207528   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-972000 --no-kubernetes --driver=docker : (14.803622057s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-972000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-972000 status -o json: exit status 2 (397.700556ms)

-- stdout --
	{"Name":"NoKubernetes-972000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-972000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-972000: (2.381340652s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.58s)

TestNoKubernetes/serial/Start (7.29s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-972000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-972000 --no-kubernetes --driver=docker : (7.289019406s)
--- PASS: TestNoKubernetes/serial/Start (7.29s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-972000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-972000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (374.963237ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

TestNoKubernetes/serial/ProfileList (16.15s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (15.494223874s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.15s)

TestNoKubernetes/serial/Stop (1.6s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-972000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-972000: (1.59805356s)
--- PASS: TestNoKubernetes/serial/Stop (1.60s)

TestNoKubernetes/serial/StartNoArgs (4.93s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-972000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-972000 --driver=docker : (4.927064864s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.93s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-972000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-972000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (420.237649ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.42s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (18.5s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1037820938/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1037820938/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1037820938/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1037820938/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
E0223 14:41:06.130429   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (18.50s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (20.1s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-14738/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2259651683/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2259651683/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2259651683/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2259651683/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (20.10s)

TestNetworkPlugins/group/auto/Start (44.32s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (44.316565185s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.32s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-452000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (12.24s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-452000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-flkvk" [6d74066f-f6b7-4175-a999-dde970e89406] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-flkvk" [6d74066f-f6b7-4175-a999-dde970e89406] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.008407893s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.24s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-452000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
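The auto group above reduces to the following sequence, which can be replayed by hand against the same profile; kubectl wait is used here only as a stand-in for the pod polling done by helpers_test.go:

    # confirm kubelet is up and inspect its flags
    out/minikube-darwin-amd64 ssh -p auto-452000 "pgrep -a kubelet"
    # deploy the netcat test workload and wait for it to become Ready
    kubectl --context auto-452000 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-452000 wait --for=condition=Ready pod -l app=netcat --timeout=15m
    # cluster DNS, localhost, and hairpin (pod reaching its own service) connectivity checks
    kubectl --context auto-452000 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"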

TestNetworkPlugins/group/kindnet/Start (65.64s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (1m5.636555681s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.64s)

TestNetworkPlugins/group/calico/Start (73.36s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
E0223 14:46:40.272791   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:47:06.214964   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m13.360396073s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.36s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-h9pwr" [b497914b-d490-4639-8560-d50c83272f26] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.014061848s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-452000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.45s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.21s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-452000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-2c47z" [b3af2a67-0d3f-44c1-831d-da1c9156300f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-2c47z" [b3af2a67-0d3f-44c1-831d-da1c9156300f] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.008053439s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.21s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-452000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7bnxj" [5aa47f78-29f4-4c7e-b3bf-23f551f61796] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.014822109s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-452000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

TestNetworkPlugins/group/calico/NetCatPod (13.27s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-452000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-jm974" [44f5a2c3-be88-491b-9a9f-bc9c9764ee29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-jm974" [44f5a2c3-be88-491b-9a9f-bc9c9764ee29] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.022878658s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.27s)

TestNetworkPlugins/group/calico/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-452000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (73.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (1m13.257782797s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.26s)
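As the kindnet, calico, and custom-flannel runs above show, --cni accepts either a built-in plugin name or a path to a CNI manifest; trimmed to the relevant flags, the two forms are:

    # built-in CNI by name
    out/minikube-darwin-amd64 start -p kindnet-452000 --cni=kindnet --driver=docker
    # custom CNI from a local manifest
    out/minikube-darwin-amd64 start -p custom-flannel-452000 --cni=testdata/kube-flannel.yaml --driver=docker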

TestNetworkPlugins/group/calico/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/false/Start (44.49s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (44.494599076s)
--- PASS: TestNetworkPlugins/group/false/Start (44.49s)

TestNetworkPlugins/group/false/KubeletFlags (0.47s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-452000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.47s)

TestNetworkPlugins/group/false/NetCatPod (12.21s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-452000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-gswt8" [23c73c92-e39f-478f-95f9-cd745cfe3a55] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-gswt8" [23c73c92-e39f-478f-95f9-cd745cfe3a55] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.008942287s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.21s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-452000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (17.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-452000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-bs6xp" [d9c5acf1-8a16-4836-a3ec-2b6fe7549d66] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-bs6xp" [d9c5acf1-8a16-4836-a3ec-2b6fe7549d66] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 17.007983598s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (17.22s)

TestNetworkPlugins/group/false/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-452000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-452000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (48.36s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (48.359838922s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (48.36s)

TestNetworkPlugins/group/flannel/Start (57.13s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (57.131954345s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-452000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-452000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-v8gzw" [be8ff619-16b6-466c-a649-02d7b6a05816] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-v8gzw" [be8ff619-16b6-466c-a649-02d7b6a05816] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.008600247s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.18s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-452000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-b59kc" [e9da3405-06ac-4264-9930-ae886949f678] Running
E0223 14:51:04.844626   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.013680198s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-452000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/flannel/NetCatPod (12.2s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-452000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-cb977" [19e761e6-1c7b-42db-a4b1-d8b3d98bfe7a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 14:51:09.965018   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-cb977" [19e761e6-1c7b-42db-a4b1-d8b3d98bfe7a] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.009194232s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.20s)

TestNetworkPlugins/group/bridge/Start (44.09s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
E0223 14:51:20.205537   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (44.08675582s)
--- PASS: TestNetworkPlugins/group/bridge/Start (44.09s)

TestNetworkPlugins/group/flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-452000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/Start (57.88s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-452000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (57.877861562s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (57.88s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-452000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.47s)

TestNetworkPlugins/group/bridge/NetCatPod (11.19s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-452000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-dgxdr" [4aafd911-e2ab-40eb-a7d5-62bb61cadb57] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 14:52:06.223585   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/functional-769000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-dgxdr" [4aafd911-e2ab-40eb-a7d5-62bb61cadb57] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.008207863s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.19s)

TestNetworkPlugins/group/bridge/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-452000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-452000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.25s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-452000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-g29mc" [4e37a120-fe46-4425-80aa-7f8eb5b8121f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 14:52:48.238271   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kindnet-452000/client.crt: no such file or directory
E0223 14:52:50.488801   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:52:50.494327   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:52:50.504505   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:52:50.524628   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:52:50.564921   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:52:50.645111   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:52:50.805642   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:52:51.125745   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:52:51.766289   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-g29mc" [4e37a120-fe46-4425-80aa-7f8eb5b8121f] Running
E0223 14:52:53.046774   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:52:55.607002   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.009520863s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.25s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-452000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-452000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (76.68s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-436000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0223 14:53:22.405212   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
E0223 14:53:31.449650   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:53:43.570414   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
E0223 14:53:49.680827   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kindnet-452000/client.crt: no such file or directory
E0223 14:54:12.412123   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:54:19.421033   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:54:19.427210   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:54:19.438201   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:54:19.458314   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:54:19.498935   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:54:19.579031   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:54:19.739180   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:54:20.059576   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:54:20.699795   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:54:21.980234   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:54:23.226649   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:54:23.233107   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:54:23.243732   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:54:23.264367   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:54:23.304710   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:54:23.385698   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:54:23.546030   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:54:23.868077   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:54:24.508567   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:54:24.541240   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:54:25.790250   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:54:28.351202   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:54:29.661656   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:54:33.472046   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-436000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (1m16.675833363s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.68s)

TestStartStop/group/no-preload/serial/DeployApp (8.27s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-436000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [52922cdf-09c8-46dd-9771-5aab24a91892] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0223 14:54:39.902432   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [52922cdf-09c8-46dd-9771-5aab24a91892] Running
E0223 14:54:43.713039   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:54:45.458353   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/skaffold-583000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.014675305s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-436000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.27s)
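The DeployApp step boils down to the three commands below; kubectl wait stands in for the harness's 8m0s pod polling, and the final command simply reports the container's open-file limit (ulimit -n):

    kubectl --context no-preload-436000 create -f testdata/busybox.yaml
    kubectl --context no-preload-436000 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context no-preload-436000 exec busybox -- /bin/sh -c "ulimit -n"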

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-436000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-436000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/no-preload/serial/Stop (11.03s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-436000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-436000 --alsologtostderr -v=3: (11.030581376s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000: exit status 7 (102.979515ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-436000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)
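As the harness notes, exit status 7 from minikube status is expected here: the profile was just stopped, so {{.Host}} prints Stopped and the command exits non-zero. A small sketch of handling that in a wrapper script (exit codes other than 0 and 7 are not relied on here):

    out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
    case $? in
      0) echo "host running" ;;
      7) echo "host stopped; safe to enable addons and restart" ;;
      *) echo "unexpected status exit code" ;;
    esac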

TestStartStop/group/no-preload/serial/SecondStart (557.69s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-436000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0223 14:55:00.384056   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:55:04.193863   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:55:11.605476   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kindnet-452000/client.crt: no such file or directory
E0223 14:55:34.334802   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/calico-452000/client.crt: no such file or directory
E0223 14:55:41.347546   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
E0223 14:55:44.160417   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:55:44.166869   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:55:44.177971   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:55:44.198240   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:55:44.238459   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:55:44.319208   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:55:44.481518   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:55:44.801679   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:55:45.155542   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
E0223 14:55:45.441833   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:55:46.722041   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:55:49.282229   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:55:54.404636   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:55:59.735006   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
E0223 14:56:03.908780   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:03.915200   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:03.925441   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:03.945592   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:04.035050   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:04.117347   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:04.277774   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:04.598048   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:04.645836   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:56:05.238176   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:06.518388   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:09.079043   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:14.200639   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:23.339793   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:56:24.441320   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 14:56:25.126775   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 14:56:27.415900   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
E0223 14:56:40.290162   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
E0223 14:56:44.922734   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-436000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (9m17.269376151s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-436000 -n no-preload-436000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (557.69s)

TestStartStop/group/old-k8s-version/serial/Stop (1.58s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-919000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-919000 --alsologtostderr -v=3: (1.578775723s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.58s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.37s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 7 (103.996796ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-919000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-jcqkb" [2ac19bed-d1a2-4f67-9c9a-585590590646] Running
E0223 15:04:19.438346   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/false-452000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01317754s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-jcqkb" [2ac19bed-d1a2-4f67-9c9a-585590590646] Running
E0223 15:04:23.244043   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/custom-flannel-452000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007956246s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-436000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-436000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-436000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-436000 -n no-preload-436000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-436000 -n no-preload-436000: exit status 2 (409.523683ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-436000 -n no-preload-436000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-436000 -n no-preload-436000: exit status 2 (408.358639ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-436000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-436000 -n no-preload-436000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-436000 -n no-preload-436000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)
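Note: the pause/unpause verification above can be replayed by hand. A minimal sketch using the same commands quoted in the log (exit-code readings are hedged from the test's own "may be ok" notes, not from minikube documentation):

    out/minikube-darwin-amd64 pause -p no-preload-436000 --alsologtostderr -v=1
    # while paused, the apiserver reports Paused and the kubelet reports Stopped;
    # both status queries exit non-zero (exit status 2 in the run above), which the test tolerates
    out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p no-preload-436000 -n no-preload-436000
    out/minikube-darwin-amd64 status --format='{{.Kubelet}}' -p no-preload-436000 -n no-preload-436000
    out/minikube-darwin-amd64 unpause -p no-preload-436000 --alsologtostderr -v=1
    # after unpause the same status queries returned exit 0 in the run above
    out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p no-preload-436000 -n no-preload-436000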

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-938000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
E0223 15:04:37.713837   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:04:37.719078   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:04:37.729195   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:04:37.749377   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:04:37.789671   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:04:37.869781   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:04:38.029937   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:04:38.350042   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:04:38.990463   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:04:40.287844   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:04:42.848064   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:04:47.970328   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:04:58.210800   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:05:18.692672   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-938000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (48.818727418s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.82s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-938000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [da533258-679b-4d12-85bc-9b634a8f7f44] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [da533258-679b-4d12-85bc-9b634a8f7f44] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.014138251s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-938000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)
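Note: the DeployApp step above can be reproduced manually. A minimal sketch, assuming the default-k8s-diff-port-938000 context from the log and the busybox.yaml manifest shipped with minikube's integration test data; the kubectl wait line is a stand-in for the test's own pod polling:

    kubectl --context default-k8s-diff-port-938000 create -f testdata/busybox.yaml
    # wait for the busybox pod to become Ready (the test allows up to 8m)
    kubectl --context default-k8s-diff-port-938000 wait --for=condition=Ready pod/busybox --timeout=8m
    # print the container's open-file limit, mirroring the exec quoted in the log
    kubectl --context default-k8s-diff-port-938000 exec busybox -- /bin/sh -c "ulimit -n"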

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-938000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-938000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)
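Note: the --images/--registries arguments above are minikube addon flags for overriding an addon's image and registry. A hedged sketch of the same check, using the commands quoted in the log:

    # enable metrics-server but redirect its MetricsServer image to the stand-in
    # registry and image (fake.domain / echoserver) that the test substitutes
    out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-938000 \
        --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # inspect the deployment to confirm the overridden image reference is in use
    kubectl --context default-k8s-diff-port-938000 describe deploy/metrics-server -n kube-system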

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-938000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-938000 --alsologtostderr -v=3: (10.918582612s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-938000 -n default-k8s-diff-port-938000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-938000 -n default-k8s-diff-port-938000: exit status 7 (102.949363ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-938000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (307.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-938000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
E0223 15:05:44.177530   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/enable-default-cni-452000/client.crt: no such file or directory
E0223 15:05:59.655508   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/no-preload-436000/client.crt: no such file or directory
E0223 15:05:59.752853   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
E0223 15:06:03.926026   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
E0223 15:06:40.308738   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/addons-034000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-938000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (5m7.008108714s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-938000 -n default-k8s-diff-port-938000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (307.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-724p9" [74fb1c58-7cb0-4c94-9695-3bff7ca392db] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-724p9" [74fb1c58-7cb0-4c94-9695-3bff7ca392db] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.015297392s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-724p9" [74fb1c58-7cb0-4c94-9695-3bff7ca392db] Running
E0223 15:10:59.767772   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/auto-452000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007468387s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-938000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-938000 "sudo crictl images -o json"
E0223 15:11:03.941568   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.43s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-938000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-938000 -n default-k8s-diff-port-938000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-938000 -n default-k8s-diff-port-938000: exit status 2 (410.324165ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-938000 -n default-k8s-diff-port-938000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-938000 -n default-k8s-diff-port-938000: exit status 2 (412.519824ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-938000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-938000 -n default-k8s-diff-port-938000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-938000 -n default-k8s-diff-port-938000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (42.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-835000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-835000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (42.205415009s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.21s)
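Note: the newest-cni start above packs several flags worth unpacking. A hedged annotation of the same command (flag readings based on general minikube behavior, not on anything asserted in this report):

    # --wait=apiserver,system_pods,default_sa : only block on these components becoming healthy
    # --feature-gates ServerSideApply=true    : forward a Kubernetes feature gate to the cluster
    # --network-plugin=cni                    : CNI mode; pods cannot schedule until a CNI is set up
    #                                           (hence the "requires additional setup" warnings later in this report)
    # --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 : pod CIDR handed through to kubeadm
    out/minikube-darwin-amd64 start -p newest-cni-835000 --memory=2200 --alsologtostderr \
        --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
        --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
        --driver=docker --kubernetes-version=v1.26.1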

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-835000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (5.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-835000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-835000 --alsologtostderr -v=3: (5.837541005s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.84s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-835000 -n newest-cni-835000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-835000 -n newest-cni-835000: exit status 7 (103.622922ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-835000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (24.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-835000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-835000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (24.066686005s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-835000 -n newest-cni-835000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-835000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-835000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-835000 -n newest-cni-835000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-835000 -n newest-cni-835000: exit status 2 (411.958555ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-835000 -n newest-cni-835000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-835000 -n newest-cni-835000: exit status 2 (411.275108ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-835000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-835000 -n newest-cni-835000
E0223 15:12:27.038716   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/flannel-452000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-835000 -n newest-cni-835000
E0223 15:12:27.709881   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/kindnet-452000/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-057000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-057000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (46.004123684s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-057000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [02bb2b99-3141-4757-9e0d-2fe2f15ed714] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [02bb2b99-3141-4757-9e0d-2fe2f15ed714] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.016220275s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-057000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-057000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0223 15:13:27.604473   15210 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-14738/.minikube/profiles/bridge-452000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-057000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (10.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-057000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-057000 --alsologtostderr -v=3: (10.976035432s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.98s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-057000 -n embed-certs-057000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-057000 -n embed-certs-057000: exit status 7 (104.330353ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-057000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (554.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-057000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-057000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (9m14.219994395s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-057000 -n embed-certs-057000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (554.64s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-lrtxm" [7b57f47c-5637-48f3-b30c-923f2bc2532e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014043788s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-lrtxm" [7b57f47c-5637-48f3-b30c-923f2bc2532e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01035132s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-057000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-057000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-057000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-057000 -n embed-certs-057000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-057000 -n embed-certs-057000: exit status 2 (411.772888ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-057000 -n embed-certs-057000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-057000 -n embed-certs-057000: exit status 2 (410.021391ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-057000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-057000 -n embed-certs-057000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-057000 -n embed-certs-057000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

                                                
                                    

Test skip (18/306)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 10.999021ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-b9vbl" [3207048f-8d2b-4b85-a0b3-2a9f58146f9d] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011106387s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-z2cgt" [b6959c8e-776b-4120-a1c3-9ca72c89d682] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009991163s
addons_test.go:305: (dbg) Run:  kubectl --context addons-034000 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-034000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-034000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.322155772s)
addons_test.go:320: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.43s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (12.34s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-034000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-034000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-034000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [53b33308-aa74-420e-99fa-207a1c1cf154] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [53b33308-aa74-420e-99fa-207a1c1cf154] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.008394476s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p addons-034000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:247: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.34s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1597: (dbg) Run:  kubectl --context functional-769000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1603: (dbg) Run:  kubectl --context functional-769000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-lkvst" [5e7f91f0-cea3-47b6-b566-dea814a948ab] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-lkvst" [5e7f91f0-cea3-47b6-b566-dea814a948ab] Running
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.008399924s
functional_test.go:1614: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.13s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:544: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-452000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-452000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-452000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-452000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-452000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-452000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-452000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-452000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-452000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-452000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-452000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-452000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: crictl containers:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> k8s: describe netcat deployment:
error: context "cilium-452000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-452000" does not exist

>>> k8s: netcat logs:
error: context "cilium-452000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-452000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-452000" does not exist

>>> k8s: coredns logs:
error: context "cilium-452000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-452000" does not exist

>>> k8s: api server logs:
error: context "cilium-452000" does not exist

>>> host: /etc/cni:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: ip a s:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: ip r s:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: iptables-save:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: iptables table nat:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-452000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-452000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-452000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-452000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-452000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-452000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-452000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-452000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-452000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-452000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-452000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: kubelet daemon config:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> k8s: kubelet logs:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-452000

>>> host: docker daemon status:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: docker daemon config:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: docker system info:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: cri-docker daemon status:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: cri-docker daemon config:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: cri-dockerd version:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: containerd daemon status:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: containerd daemon config:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: containerd config dump:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: crio daemon status:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: crio daemon config:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: /etc/crio:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

>>> host: crio config:
* Profile "cilium-452000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-452000"

----------------------- debugLogs end: cilium-452000 [took: 5.375363867s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-452000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-452000
--- SKIP: TestNetworkPlugins/group/cilium (5.88s)

TestStartStop/group/disable-driver-mounts (0.4s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-500000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-500000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.40s)
