Test Report: Docker_macOS 15909

7e38a61ac9e37ff976ead0e1828eff55bc8f945b:2023-02-24:28054

Failed tests (16/306)

TestIngressAddonLegacy/StartLegacyK8sCluster (264.64s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-721000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0224 14:55:10.302785   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:55:37.996010   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:55:54.062901   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.069298   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.081452   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.102145   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.143589   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.224738   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.384912   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:54.705149   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:55.347512   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:56.627934   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:55:59.188486   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:56:04.309901   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:56:14.551737   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:56:35.032477   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 14:57:15.995058   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-721000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m24.607272475s)

-- stdout --
	* [ingress-addon-legacy-721000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-721000 in cluster ingress-addon-legacy-721000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0224 14:52:58.181934   30059 out.go:296] Setting OutFile to fd 1 ...
	I0224 14:52:58.182105   30059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:52:58.182110   30059 out.go:309] Setting ErrFile to fd 2...
	I0224 14:52:58.182114   30059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:52:58.182225   30059 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 14:52:58.183600   30059 out.go:303] Setting JSON to false
	I0224 14:52:58.201854   30059 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6752,"bootTime":1677272426,"procs":384,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 14:52:58.201955   30059 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 14:52:58.223317   30059 out.go:177] * [ingress-addon-legacy-721000] minikube v1.29.0 on Darwin 13.2.1
	I0224 14:52:58.266413   30059 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 14:52:58.266431   30059 notify.go:220] Checking for updates...
	I0224 14:52:58.310399   30059 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 14:52:58.332542   30059 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 14:52:58.354373   30059 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 14:52:58.376569   30059 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	I0224 14:52:58.398727   30059 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 14:52:58.420804   30059 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 14:52:58.482054   30059 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 14:52:58.482185   30059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 14:52:58.623018   30059 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-24 22:52:58.531784684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 14:52:58.645517   30059 out.go:177] * Using the docker driver based on user configuration
	I0224 14:52:58.671826   30059 start.go:296] selected driver: docker
	I0224 14:52:58.671855   30059 start.go:857] validating driver "docker" against <nil>
	I0224 14:52:58.671876   30059 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 14:52:58.675800   30059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 14:52:58.816885   30059 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-24 22:52:58.72579646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 14:52:58.817014   30059 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0224 14:52:58.817197   30059 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 14:52:58.838927   30059 out.go:177] * Using Docker Desktop driver with root privileges
	I0224 14:52:58.860823   30059 cni.go:84] Creating CNI manager for ""
	I0224 14:52:58.860876   30059 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0224 14:52:58.860893   30059 start_flags.go:319] config:
	{Name:ingress-addon-legacy-721000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-721000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 14:52:58.904250   30059 out.go:177] * Starting control plane node ingress-addon-legacy-721000 in cluster ingress-addon-legacy-721000
	I0224 14:52:58.925628   30059 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 14:52:58.946517   30059 out.go:177] * Pulling base image ...
	I0224 14:52:58.988704   30059 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 14:52:58.988707   30059 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0224 14:52:59.046143   30059 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 14:52:59.046166   30059 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 14:52:59.097159   30059 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0224 14:52:59.097237   30059 cache.go:57] Caching tarball of preloaded images
	I0224 14:52:59.097728   30059 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0224 14:52:59.119627   30059 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0224 14:52:59.162334   30059 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0224 14:52:59.377767   30059 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0224 14:53:11.832816   30059 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0224 14:53:11.832994   30059 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0224 14:53:12.441239   30059 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0224 14:53:12.441544   30059 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/config.json ...
	I0224 14:53:12.441572   30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/config.json: {Name:mk52ddce85e7b1119aa1adde8d4c66620a5d3735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 14:53:12.441931   30059 cache.go:193] Successfully downloaded all kic artifacts
	I0224 14:53:12.441956   30059 start.go:364] acquiring machines lock for ingress-addon-legacy-721000: {Name:mkf84ae28139f6b533abe522fecd4e33229d5580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 14:53:12.442116   30059 start.go:368] acquired machines lock for "ingress-addon-legacy-721000" in 153.124µs
	I0224 14:53:12.442139   30059 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-721000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-721000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 14:53:12.442244   30059 start.go:125] createHost starting for "" (driver="docker")
	I0224 14:53:12.505380   30059 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0224 14:53:12.505660   30059 start.go:159] libmachine.API.Create for "ingress-addon-legacy-721000" (driver="docker")
	I0224 14:53:12.505704   30059 client.go:168] LocalClient.Create starting
	I0224 14:53:12.505905   30059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem
	I0224 14:53:12.505997   30059 main.go:141] libmachine: Decoding PEM data...
	I0224 14:53:12.506034   30059 main.go:141] libmachine: Parsing certificate...
	I0224 14:53:12.506140   30059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem
	I0224 14:53:12.506202   30059 main.go:141] libmachine: Decoding PEM data...
	I0224 14:53:12.506219   30059 main.go:141] libmachine: Parsing certificate...
	I0224 14:53:12.507008   30059 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-721000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0224 14:53:12.564667   30059 cli_runner.go:211] docker network inspect ingress-addon-legacy-721000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0224 14:53:12.564776   30059 network_create.go:281] running [docker network inspect ingress-addon-legacy-721000] to gather additional debugging logs...
	I0224 14:53:12.564794   30059 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-721000
	W0224 14:53:12.618658   30059 cli_runner.go:211] docker network inspect ingress-addon-legacy-721000 returned with exit code 1
	I0224 14:53:12.618683   30059 network_create.go:284] error running [docker network inspect ingress-addon-legacy-721000]: docker network inspect ingress-addon-legacy-721000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-721000
	I0224 14:53:12.618703   30059 network_create.go:286] output of [docker network inspect ingress-addon-legacy-721000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-721000
	
	** /stderr **
	I0224 14:53:12.618798   30059 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 14:53:12.673013   30059 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00052d630}
	I0224 14:53:12.673045   30059 network_create.go:123] attempt to create docker network ingress-addon-legacy-721000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0224 14:53:12.673113   30059 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-721000 ingress-addon-legacy-721000
	I0224 14:53:12.761514   30059 network_create.go:107] docker network ingress-addon-legacy-721000 192.168.49.0/24 created
	I0224 14:53:12.761550   30059 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-721000" container
	I0224 14:53:12.761662   30059 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 14:53:12.816351   30059 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-721000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-721000 --label created_by.minikube.sigs.k8s.io=true
	I0224 14:53:12.873125   30059 oci.go:103] Successfully created a docker volume ingress-addon-legacy-721000
	I0224 14:53:12.873250   30059 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-721000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-721000 --entrypoint /usr/bin/test -v ingress-addon-legacy-721000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0224 14:53:13.301802   30059 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-721000
	I0224 14:53:13.301848   30059 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0224 14:53:13.301863   30059 kic.go:190] Starting extracting preloaded images to volume ...
	I0224 14:53:13.301986   30059 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-721000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 14:53:19.681973   30059 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-721000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.379750721s)
	I0224 14:53:19.681993   30059 kic.go:199] duration metric: took 6.380015 seconds to extract preloaded images to volume
	I0224 14:53:19.682114   30059 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 14:53:19.827636   30059 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-721000 --name ingress-addon-legacy-721000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-721000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-721000 --network ingress-addon-legacy-721000 --ip 192.168.49.2 --volume ingress-addon-legacy-721000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0224 14:53:20.302105   30059 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-721000 --format={{.State.Running}}
	I0224 14:53:20.363536   30059 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-721000 --format={{.State.Status}}
	I0224 14:53:20.425966   30059 cli_runner.go:164] Run: docker exec ingress-addon-legacy-721000 stat /var/lib/dpkg/alternatives/iptables
	I0224 14:53:20.542647   30059 oci.go:144] the created container "ingress-addon-legacy-721000" has a running status.
	I0224 14:53:20.542681   30059 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa...
	I0224 14:53:20.600825   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0224 14:53:20.600915   30059 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 14:53:20.710672   30059 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-721000 --format={{.State.Status}}
	I0224 14:53:20.772336   30059 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 14:53:20.772358   30059 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-721000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 14:53:20.877755   30059 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-721000 --format={{.State.Status}}
	I0224 14:53:20.935523   30059 machine.go:88] provisioning docker machine ...
	I0224 14:53:20.935566   30059 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-721000"
	I0224 14:53:20.935668   30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:53:20.992840   30059 main.go:141] libmachine: Using SSH client type: native
	I0224 14:53:20.993240   30059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 57495 <nil> <nil>}
	I0224 14:53:20.993258   30059 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-721000 && echo "ingress-addon-legacy-721000" | sudo tee /etc/hostname
	I0224 14:53:21.137995   30059 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-721000
	
	I0224 14:53:21.138091   30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:53:21.195659   30059 main.go:141] libmachine: Using SSH client type: native
	I0224 14:53:21.196010   30059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 57495 <nil> <nil>}
	I0224 14:53:21.196026   30059 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-721000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-721000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-721000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 14:53:21.331659   30059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 14:53:21.331681   30059 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
	I0224 14:53:21.331701   30059 ubuntu.go:177] setting up certificates
	I0224 14:53:21.331706   30059 provision.go:83] configureAuth start
	I0224 14:53:21.331777   30059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-721000
	I0224 14:53:21.388148   30059 provision.go:138] copyHostCerts
	I0224 14:53:21.388196   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 14:53:21.388258   30059 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
	I0224 14:53:21.388266   30059 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 14:53:21.388373   30059 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
	I0224 14:53:21.388534   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 14:53:21.388565   30059 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
	I0224 14:53:21.388570   30059 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 14:53:21.388632   30059 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
	I0224 14:53:21.388748   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 14:53:21.388785   30059 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
	I0224 14:53:21.388791   30059 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 14:53:21.388854   30059 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
	I0224 14:53:21.388975   30059 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-721000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-721000]
	I0224 14:53:21.637517   30059 provision.go:172] copyRemoteCerts
	I0224 14:53:21.637575   30059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 14:53:21.637631   30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:53:21.695767   30059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57495 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa Username:docker}
	I0224 14:53:21.791725   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0224 14:53:21.791812   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0224 14:53:21.808984   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0224 14:53:21.809056   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 14:53:21.826214   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0224 14:53:21.826281   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 14:53:21.843433   30059 provision.go:86] duration metric: configureAuth took 511.700655ms
	I0224 14:53:21.843448   30059 ubuntu.go:193] setting minikube options for container-runtime
	I0224 14:53:21.843612   30059 config.go:182] Loaded profile config "ingress-addon-legacy-721000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0224 14:53:21.843681   30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:53:21.902376   30059 main.go:141] libmachine: Using SSH client type: native
	I0224 14:53:21.902750   30059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 57495 <nil> <nil>}
	I0224 14:53:21.902767   30059 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 14:53:22.039549   30059 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 14:53:22.039561   30059 ubuntu.go:71] root file system type: overlay
	I0224 14:53:22.039652   30059 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 14:53:22.039744   30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:53:22.095247   30059 main.go:141] libmachine: Using SSH client type: native
	I0224 14:53:22.095608   30059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 57495 <nil> <nil>}
	I0224 14:53:22.095655   30059 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 14:53:22.238926   30059 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 14:53:22.239016   30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:53:22.295941   30059 main.go:141] libmachine: Using SSH client type: native
	I0224 14:53:22.296294   30059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 57495 <nil> <nil>}
	I0224 14:53:22.296306   30059 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 14:53:22.918023   30059 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 22:53:22.235900562 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 14:53:22.918049   30059 machine.go:91] provisioned docker machine in 1.982471367s
	I0224 14:53:22.918055   30059 client.go:171] LocalClient.Create took 10.41215729s
	I0224 14:53:22.918075   30059 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-721000" took 10.412228162s
	I0224 14:53:22.918087   30059 start.go:300] post-start starting for "ingress-addon-legacy-721000" (driver="docker")
	I0224 14:53:22.918094   30059 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 14:53:22.918181   30059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 14:53:22.918241   30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:53:22.976772   30059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57495 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa Username:docker}
	I0224 14:53:23.074331   30059 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 14:53:23.078014   30059 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 14:53:23.078030   30059 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 14:53:23.078037   30059 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 14:53:23.078042   30059 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 14:53:23.078054   30059 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
	I0224 14:53:23.078155   30059 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
	I0224 14:53:23.078322   30059 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
	I0224 14:53:23.078328   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> /etc/ssl/certs/268712.pem
	I0224 14:53:23.078521   30059 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 14:53:23.085778   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
	I0224 14:53:23.102853   30059 start.go:303] post-start completed in 184.75225ms
	I0224 14:53:23.103385   30059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-721000
	I0224 14:53:23.159511   30059 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/config.json ...
	I0224 14:53:23.159937   30059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 14:53:23.159992   30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:53:23.217122   30059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57495 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa Username:docker}
	I0224 14:53:23.308695   30059 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 14:53:23.313580   30059 start.go:128] duration metric: createHost completed in 10.871113692s
	I0224 14:53:23.313599   30059 start.go:83] releasing machines lock for "ingress-addon-legacy-721000", held for 10.871278695s
	I0224 14:53:23.313684   30059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-721000
	I0224 14:53:23.370591   30059 ssh_runner.go:195] Run: cat /version.json
	I0224 14:53:23.370624   30059 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0224 14:53:23.370671   30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:53:23.370693   30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:53:23.432268   30059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57495 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa Username:docker}
	I0224 14:53:23.432457   30059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57495 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa Username:docker}
	I0224 14:53:23.525315   30059 ssh_runner.go:195] Run: systemctl --version
	I0224 14:53:23.733260   30059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 14:53:23.738417   30059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 14:53:23.758420   30059 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 14:53:23.758490   30059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0224 14:53:23.772218   30059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0224 14:53:23.779905   30059 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
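	The find/sed pair above rewrites any bridge and podman CNI configs on the node so their pod subnet becomes 10.244.0.0/16, and the line above names the one file it actually touched. A minimal spot-check sketch (hypothetical, not part of the test run; container and file names taken from this log):
	# Confirm the patched subnet in the bridge CNI config inside the node container
	docker exec -t ingress-addon-legacy-721000 sh -c 'grep "\"subnet\"" /etc/cni/net.d/100-crio-bridge.conf'
	# expected to print a line containing: "subnet": "10.244.0.0/16"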
	I0224 14:53:23.779918   30059 start.go:485] detecting cgroup driver to use...
	I0224 14:53:23.779928   30059 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 14:53:23.780008   30059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 14:53:23.793245   30059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
	I0224 14:53:23.801898   30059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 14:53:23.810209   30059 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 14:53:23.810267   30059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 14:53:23.818908   30059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 14:53:23.827427   30059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 14:53:23.835826   30059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 14:53:23.844264   30059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 14:53:23.852293   30059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 14:53:23.860664   30059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 14:53:23.867842   30059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 14:53:23.874870   30059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 14:53:23.938881   30059 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 14:53:24.015057   30059 start.go:485] detecting cgroup driver to use...
	I0224 14:53:24.015075   30059 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 14:53:24.015153   30059 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 14:53:24.025726   30059 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 14:53:24.025796   30059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 14:53:24.036759   30059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 14:53:24.051055   30059 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 14:53:24.157945   30059 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 14:53:24.250060   30059 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 14:53:24.250092   30059 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 14:53:24.264107   30059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 14:53:24.360724   30059 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 14:53:24.584636   30059 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 14:53:24.611969   30059 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 14:53:24.659936   30059 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
	I0224 14:53:24.660161   30059 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-721000 dig +short host.docker.internal
	I0224 14:53:24.779029   30059 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 14:53:24.779133   30059 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 14:53:24.783671   30059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 14:53:24.793656   30059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:53:24.851176   30059 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0224 14:53:24.851279   30059 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 14:53:24.871434   30059 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0224 14:53:24.871450   30059 docker.go:560] Images already preloaded, skipping extraction
	I0224 14:53:24.871519   30059 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 14:53:24.892340   30059 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0224 14:53:24.892364   30059 cache_images.go:84] Images are preloaded, skipping loading
	I0224 14:53:24.892451   30059 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 14:53:24.918539   30059 cni.go:84] Creating CNI manager for ""
	I0224 14:53:24.918558   30059 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0224 14:53:24.918574   30059 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 14:53:24.918596   30059 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-721000 NodeName:ingress-addon-legacy-721000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 14:53:24.918717   30059 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-721000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
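	The KubeletConfiguration generated above pins cgroupDriver to cgroupfs, matching the driver minikube detected on the host and configured for Docker and containerd earlier in this log; a mismatch between the two is a common reason the kubelet never reports healthy in the [kubelet-check] phase further down. A minimal sketch for confirming both sides agree on the node (hypothetical manual step, names taken from this log; /var/lib/kubelet/config.yaml only exists once kubeadm has written it):
	# Cgroup driver Docker reports inside the node container (same query minikube runs itself)
	docker exec -t ingress-addon-legacy-721000 docker info --format '{{.CgroupDriver}}'
	# Cgroup driver the kubelet was told to use
	docker exec -t ingress-addon-legacy-721000 sudo grep cgroupDriver /var/lib/kubelet/config.yaml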
	
	I0224 14:53:24.918799   30059 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-721000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-721000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 14:53:24.918862   30059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0224 14:53:24.926933   30059 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 14:53:24.926997   30059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 14:53:24.934470   30059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0224 14:53:24.947171   30059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0224 14:53:24.960428   30059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0224 14:53:24.973779   30059 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0224 14:53:24.977828   30059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 14:53:24.987744   30059 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000 for IP: 192.168.49.2
	I0224 14:53:24.987764   30059 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 14:53:24.987931   30059 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 14:53:24.987998   30059 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 14:53:24.988041   30059 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/client.key
	I0224 14:53:24.988053   30059 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/client.crt with IP's: []
	I0224 14:53:25.082625   30059 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/client.crt ...
	I0224 14:53:25.082638   30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/client.crt: {Name:mk97312d2d5782f42d613977a91abf12f03f9ce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 14:53:25.083020   30059 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/client.key ...
	I0224 14:53:25.083031   30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/client.key: {Name:mkaa96049743a9a4c17b08c87d839ddbfddefd1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 14:53:25.083269   30059 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key.dd3b5fb2
	I0224 14:53:25.083286   30059 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0224 14:53:25.148858   30059 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt.dd3b5fb2 ...
	I0224 14:53:25.148866   30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt.dd3b5fb2: {Name:mk73b3809824182651a8eadb9727dd5e66ad90f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 14:53:25.149094   30059 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key.dd3b5fb2 ...
	I0224 14:53:25.149102   30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key.dd3b5fb2: {Name:mkbb90123778aa98b26a330275094dc8b741bcc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 14:53:25.149288   30059 certs.go:333] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt
	I0224 14:53:25.149467   30059 certs.go:337] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key
	I0224 14:53:25.149619   30059 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.key
	I0224 14:53:25.149633   30059 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.crt with IP's: []
	I0224 14:53:25.498835   30059 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.crt ...
	I0224 14:53:25.498851   30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.crt: {Name:mkdcda852c2995bc66118519dd0dcc2ab740c576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 14:53:25.499190   30059 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.key ...
	I0224 14:53:25.499199   30059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.key: {Name:mk252af65af1521443d09681340cbb1597b80fc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 14:53:25.499376   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0224 14:53:25.499416   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0224 14:53:25.499438   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0224 14:53:25.499458   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0224 14:53:25.499481   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0224 14:53:25.499501   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0224 14:53:25.499519   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0224 14:53:25.499538   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0224 14:53:25.499639   30059 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 14:53:25.499689   30059 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 14:53:25.499701   30059 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 14:53:25.499742   30059 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 14:53:25.499779   30059 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 14:53:25.499815   30059 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 14:53:25.499891   30059 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 14:53:25.499921   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0224 14:53:25.499940   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem -> /usr/share/ca-certificates/26871.pem
	I0224 14:53:25.499958   30059 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> /usr/share/ca-certificates/268712.pem
	I0224 14:53:25.500467   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 14:53:25.519284   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 14:53:25.536452   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 14:53:25.553951   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/ingress-addon-legacy-721000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 14:53:25.571738   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 14:53:25.589206   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 14:53:25.606794   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 14:53:25.624109   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 14:53:25.641462   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 14:53:25.659127   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 14:53:25.676559   30059 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 14:53:25.693905   30059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 14:53:25.707358   30059 ssh_runner.go:195] Run: openssl version
	I0224 14:53:25.712867   30059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 14:53:25.720767   30059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 14:53:25.724669   30059 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 14:53:25.724720   30059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 14:53:25.730114   30059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
	I0224 14:53:25.738468   30059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 14:53:25.746600   30059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 14:53:25.750519   30059 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 14:53:25.750565   30059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 14:53:25.756244   30059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 14:53:25.764451   30059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 14:53:25.772474   30059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 14:53:25.776456   30059 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 14:53:25.776500   30059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 14:53:25.782044   30059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 14:53:25.790125   30059 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-721000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-721000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 14:53:25.790259   30059 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 14:53:25.809486   30059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 14:53:25.817271   30059 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 14:53:25.824784   30059 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 14:53:25.824888   30059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 14:53:25.832950   30059 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 14:53:25.832980   30059 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 14:53:25.880682   30059 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0224 14:53:25.880750   30059 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 14:53:26.050215   30059 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 14:53:26.050293   30059 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 14:53:26.050379   30059 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 14:53:26.204451   30059 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 14:53:26.204963   30059 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 14:53:26.205004   30059 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0224 14:53:26.282649   30059 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 14:53:26.304336   30059 out.go:204]   - Generating certificates and keys ...
	I0224 14:53:26.304439   30059 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 14:53:26.304521   30059 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 14:53:26.610536   30059 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 14:53:26.870683   30059 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0224 14:53:27.023222   30059 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0224 14:53:27.093151   30059 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0224 14:53:27.473679   30059 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0224 14:53:27.473831   30059 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-721000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0224 14:53:27.569128   30059 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0224 14:53:27.569260   30059 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-721000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0224 14:53:27.645072   30059 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 14:53:27.850789   30059 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 14:53:28.031182   30059 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0224 14:53:28.031321   30059 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 14:53:28.240457   30059 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 14:53:28.307493   30059 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 14:53:28.591814   30059 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 14:53:28.804834   30059 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 14:53:28.805827   30059 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 14:53:28.826283   30059 out.go:204]   - Booting up control plane ...
	I0224 14:53:28.826465   30059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 14:53:28.826624   30059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 14:53:28.826738   30059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 14:53:28.826867   30059 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 14:53:28.827249   30059 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 14:54:08.816546   30059 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0224 14:54:08.817953   30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 14:54:08.818169   30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 14:54:13.819939   30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 14:54:13.820193   30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 14:54:23.822140   30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 14:54:23.822395   30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 14:54:43.824430   30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 14:54:43.824677   30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 14:55:23.827417   30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 14:55:23.827647   30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 14:55:23.827661   30059 kubeadm.go:322] 
	I0224 14:55:23.827702   30059 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0224 14:55:23.827796   30059 kubeadm.go:322] 		timed out waiting for the condition
	I0224 14:55:23.827815   30059 kubeadm.go:322] 
	I0224 14:55:23.827854   30059 kubeadm.go:322] 	This error is likely caused by:
	I0224 14:55:23.827926   30059 kubeadm.go:322] 		- The kubelet is not running
	I0224 14:55:23.828082   30059 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 14:55:23.828098   30059 kubeadm.go:322] 
	I0224 14:55:23.828227   30059 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 14:55:23.828291   30059 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0224 14:55:23.828330   30059 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0224 14:55:23.828336   30059 kubeadm.go:322] 
	I0224 14:55:23.828460   30059 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 14:55:23.828581   30059 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 14:55:23.828595   30059 kubeadm.go:322] 
	I0224 14:55:23.828690   30059 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0224 14:55:23.828745   30059 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0224 14:55:23.828825   30059 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0224 14:55:23.828885   30059 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0224 14:55:23.828900   30059 kubeadm.go:322] 
	I0224 14:55:23.831155   30059 kubeadm.go:322] W0224 22:53:25.880120    1157 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0224 14:55:23.831320   30059 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0224 14:55:23.831400   30059 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0224 14:55:23.831506   30059 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
	I0224 14:55:23.831588   30059 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 14:55:23.831700   30059 kubeadm.go:322] W0224 22:53:28.810292    1157 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0224 14:55:23.831803   30059 kubeadm.go:322] W0224 22:53:28.811314    1157 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0224 14:55:23.831877   30059 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 14:55:23.831954   30059 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0224 14:55:23.832152   30059 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-721000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-721000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0224 22:53:25.880120    1157 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0224 22:53:28.810292    1157 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0224 22:53:28.811314    1157 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-721000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-721000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0224 22:53:25.880120    1157 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0224 22:53:28.810292    1157 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0224 22:53:28.811314    1157 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
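	The first init attempt has failed because the kubelet never answered on port 10248, so wait-control-plane timed out; the retry below repeats the same sequence with the same outcome. Before the kubeadm reset a few lines down, the natural next step would be to look at the kubelet on the node directly, using the commands kubeadm itself suggests plus the endpoint it polls. A minimal triage sketch (hypothetical, not part of the test run; container name taken from this log):
	# Is the kubelet running, and what is it logging?
	docker exec -t ingress-addon-legacy-721000 sudo systemctl status kubelet --no-pager
	docker exec -t ingress-addon-legacy-721000 sudo journalctl -u kubelet --no-pager | tail -n 50
	# The health endpoint [kubelet-check] keeps probing
	docker exec -t ingress-addon-legacy-721000 curl -sS http://localhost:10248/healthz
	# Control-plane containers that crashed at start, then their logs
	docker exec -t ingress-addon-legacy-721000 sh -c 'docker ps -a | grep kube | grep -v pause'
	docker exec -t ingress-addon-legacy-721000 docker logs CONTAINERID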
	
	I0224 14:55:23.832189   30059 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0224 14:55:24.242180   30059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 14:55:24.251976   30059 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 14:55:24.252030   30059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 14:55:24.259547   30059 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 14:55:24.259570   30059 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 14:55:24.307087   30059 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0224 14:55:24.307145   30059 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 14:55:24.472699   30059 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 14:55:24.472796   30059 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 14:55:24.472891   30059 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 14:55:24.627183   30059 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 14:55:24.627591   30059 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 14:55:24.627639   30059 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0224 14:55:24.699699   30059 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 14:55:24.721311   30059 out.go:204]   - Generating certificates and keys ...
	I0224 14:55:24.721415   30059 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 14:55:24.721474   30059 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 14:55:24.721543   30059 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0224 14:55:24.721600   30059 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0224 14:55:24.721657   30059 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0224 14:55:24.721706   30059 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0224 14:55:24.721825   30059 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0224 14:55:24.721955   30059 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0224 14:55:24.722050   30059 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0224 14:55:24.722131   30059 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0224 14:55:24.722220   30059 kubeadm.go:322] [certs] Using the existing "sa" key
	I0224 14:55:24.722313   30059 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 14:55:24.803926   30059 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 14:55:24.928124   30059 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 14:55:25.043520   30059 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 14:55:25.193611   30059 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 14:55:25.194064   30059 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 14:55:25.215696   30059 out.go:204]   - Booting up control plane ...
	I0224 14:55:25.215944   30059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 14:55:25.216099   30059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 14:55:25.216207   30059 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 14:55:25.216343   30059 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 14:55:25.216596   30059 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 14:56:05.203591   30059 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0224 14:56:05.204505   30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 14:56:05.204751   30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 14:56:10.205510   30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 14:56:10.205741   30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 14:56:20.207797   30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 14:56:20.208048   30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 14:56:40.209838   30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 14:56:40.210072   30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 14:57:20.212217   30059 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 14:57:20.212451   30059 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 14:57:20.212469   30059 kubeadm.go:322] 
	I0224 14:57:20.212537   30059 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0224 14:57:20.212582   30059 kubeadm.go:322] 		timed out waiting for the condition
	I0224 14:57:20.212588   30059 kubeadm.go:322] 
	I0224 14:57:20.212622   30059 kubeadm.go:322] 	This error is likely caused by:
	I0224 14:57:20.212671   30059 kubeadm.go:322] 		- The kubelet is not running
	I0224 14:57:20.212800   30059 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 14:57:20.212810   30059 kubeadm.go:322] 
	I0224 14:57:20.212974   30059 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 14:57:20.213026   30059 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0224 14:57:20.213074   30059 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0224 14:57:20.213086   30059 kubeadm.go:322] 
	I0224 14:57:20.213199   30059 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 14:57:20.213311   30059 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0224 14:57:20.213328   30059 kubeadm.go:322] 
	I0224 14:57:20.213441   30059 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0224 14:57:20.213499   30059 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0224 14:57:20.213587   30059 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0224 14:57:20.213611   30059 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0224 14:57:20.213616   30059 kubeadm.go:322] 
	I0224 14:57:20.216377   30059 kubeadm.go:322] W0224 22:55:24.305833    3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0224 14:57:20.216521   30059 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0224 14:57:20.216579   30059 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0224 14:57:20.216676   30059 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
	I0224 14:57:20.216753   30059 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 14:57:20.216858   30059 kubeadm.go:322] W0224 22:55:25.198301    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0224 14:57:20.216948   30059 kubeadm.go:322] W0224 22:55:25.198995    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0224 14:57:20.217015   30059 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 14:57:20.217072   30059 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0224 14:57:20.217108   30059 kubeadm.go:403] StartCluster complete in 3m54.422743929s
	I0224 14:57:20.217201   30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 14:57:20.235671   30059 logs.go:277] 0 containers: []
	W0224 14:57:20.235683   30059 logs.go:279] No container was found matching "kube-apiserver"
	I0224 14:57:20.235750   30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 14:57:20.255341   30059 logs.go:277] 0 containers: []
	W0224 14:57:20.255354   30059 logs.go:279] No container was found matching "etcd"
	I0224 14:57:20.255423   30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 14:57:20.274592   30059 logs.go:277] 0 containers: []
	W0224 14:57:20.274604   30059 logs.go:279] No container was found matching "coredns"
	I0224 14:57:20.274672   30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 14:57:20.293808   30059 logs.go:277] 0 containers: []
	W0224 14:57:20.293821   30059 logs.go:279] No container was found matching "kube-scheduler"
	I0224 14:57:20.293900   30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 14:57:20.312157   30059 logs.go:277] 0 containers: []
	W0224 14:57:20.312174   30059 logs.go:279] No container was found matching "kube-proxy"
	I0224 14:57:20.312240   30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 14:57:20.332510   30059 logs.go:277] 0 containers: []
	W0224 14:57:20.332523   30059 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 14:57:20.332599   30059 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 14:57:20.351554   30059 logs.go:277] 0 containers: []
	W0224 14:57:20.351568   30059 logs.go:279] No container was found matching "kindnet"
	I0224 14:57:20.351575   30059 logs.go:123] Gathering logs for container status ...
	I0224 14:57:20.351583   30059 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 14:57:22.400248   30059 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048615565s)
	I0224 14:57:22.400410   30059 logs.go:123] Gathering logs for kubelet ...
	I0224 14:57:22.400422   30059 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 14:57:22.440089   30059 logs.go:123] Gathering logs for dmesg ...
	I0224 14:57:22.440103   30059 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 14:57:22.454798   30059 logs.go:123] Gathering logs for describe nodes ...
	I0224 14:57:22.454812   30059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 14:57:22.509396   30059 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 14:57:22.509407   30059 logs.go:123] Gathering logs for Docker ...
	I0224 14:57:22.509414   30059 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0224 14:57:22.534365   30059 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0224 22:55:24.305833    3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0224 22:55:25.198301    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0224 22:55:25.198995    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0224 14:57:22.534386   30059 out.go:239] * 
	* 
	W0224 14:57:22.534506   30059 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0224 22:55:24.305833    3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0224 22:55:25.198301    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0224 22:55:25.198995    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 14:57:22.534519   30059 out.go:239] * 
	* 
	W0224 14:57:22.535130   30059 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 14:57:22.621779   30059 out.go:177] 
	W0224 14:57:22.664182   30059 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0224 22:55:24.305833    3571 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0224 22:55:25.198301    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0224 22:55:25.198995    3571 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 14:57:22.664300   30059 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0224 14:57:22.664367   30059 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0224 14:57:22.685838   30059 out.go:177] 

                                                
                                                
** /stderr **
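The kubeadm output captured above shows the control plane never came up because the kubelet never answered its health check on http://localhost:10248/healthz, while the preflight warnings point at a cgroup-driver mismatch (Docker reports "cgroupfs", "systemd" is recommended) and a Docker version (23.0.1) newer than the last validated release. A minimal diagnostic sketch, assuming the node container is named after the profile (ingress-addon-legacy-721000, as in the cli_runner calls later in this report), exposes its own docker CLI, and uses the kubelet files kubeadm reports writing; these commands are not part of the captured run:

    # Compare Docker's cgroup driver with whatever the kubelet was configured to use
    docker exec ingress-addon-legacy-721000 docker info --format '{{.CgroupDriver}}'
    docker exec ingress-addon-legacy-721000 grep -i cgroup /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env
    # Check whether the kubelet service is running at all, and why it may have exited
    docker exec ingress-addon-legacy-721000 systemctl status kubelet --no-pager
    docker exec ingress-addon-legacy-721000 journalctl -u kubelet -n 50 --no-pager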
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-721000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (264.64s)
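The suggestion minikube prints at the end of the run is to align the kubelet with the systemd cgroup driver. A hedged sketch of a retry, built only from that suggestion and the flags of the failing invocation above; the delete step is an assumption, added to clear the half-initialized profile before retrying:

    # Discard the partially initialized profile (assumed safe to delete in a test environment)
    out/minikube-darwin-amd64 delete -p ingress-addon-legacy-721000
    # Retry with the kubelet pinned to the systemd cgroup driver, as the log suggests
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-721000 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd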

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (93.75s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-721000 addons enable ingress --alsologtostderr -v=5
E0224 14:58:37.917072   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-721000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m33.295323945s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 14:57:22.834122   30456 out.go:296] Setting OutFile to fd 1 ...
	I0224 14:57:22.834300   30456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:57:22.834305   30456 out.go:309] Setting ErrFile to fd 2...
	I0224 14:57:22.834309   30456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:57:22.834420   30456 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 14:57:22.855778   30456 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0224 14:57:22.877075   30456 config.go:182] Loaded profile config "ingress-addon-legacy-721000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0224 14:57:22.877095   30456 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-721000"
	I0224 14:57:22.877103   30456 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-721000"
	I0224 14:57:22.877400   30456 host.go:66] Checking if "ingress-addon-legacy-721000" exists ...
	I0224 14:57:22.877944   30456 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-721000 --format={{.State.Status}}
	I0224 14:57:22.957516   30456 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0224 14:57:22.979738   30456 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0224 14:57:23.001380   30456 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0224 14:57:23.022129   30456 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0224 14:57:23.043449   30456 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0224 14:57:23.043472   30456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0224 14:57:23.043567   30456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:57:23.100575   30456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57495 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa Username:docker}
	I0224 14:57:23.203095   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:57:23.255704   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:23.255745   30456 retry.go:31] will retry after 291.803196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:23.549921   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:57:23.605864   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:23.605880   30456 retry.go:31] will retry after 219.388053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:23.826597   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:57:23.880223   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:23.880238   30456 retry.go:31] will retry after 491.605343ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:24.372093   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:57:24.426398   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:24.426427   30456 retry.go:31] will retry after 1.166577217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:25.594748   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:57:25.651093   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:25.651110   30456 retry.go:31] will retry after 1.825106225s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:27.476800   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:57:27.530516   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:27.530532   30456 retry.go:31] will retry after 2.716358603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:30.249189   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:57:30.304007   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:30.304023   30456 retry.go:31] will retry after 2.951367433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:33.257655   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:57:33.311585   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:33.311601   30456 retry.go:31] will retry after 4.740798266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:38.054811   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:57:38.108371   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:38.108387   30456 retry.go:31] will retry after 3.48055274s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:41.589837   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:57:41.644328   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:41.644344   30456 retry.go:31] will retry after 9.036425665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:50.681090   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:57:50.734621   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:57:50.734640   30456 retry.go:31] will retry after 17.049262477s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:07.786132   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:58:07.841461   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:07.856626   30456 retry.go:31] will retry after 13.458597761s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:21.316664   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:58:21.372149   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:21.372167   30456 retry.go:31] will retry after 34.538009352s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:55.911078   30456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0224 14:58:55.966397   30456 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:55.966431   30456 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-721000"
	I0224 14:58:55.990259   30456 out.go:177] * Verifying ingress addon...
	I0224 14:58:56.013097   30456 out.go:177] 
	W0224 14:58:56.035223   30456 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-721000" does not exist: client config: context "ingress-addon-legacy-721000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-721000" does not exist: client config: context "ingress-addon-legacy-721000" does not exist]
	W0224 14:58:56.035256   30456 out.go:239] * 
	* 
	W0224 14:58:56.040201   30456 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 14:58:56.062025   30456 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
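Every apply attempt above fails the same way: kubectl inside the node cannot reach the apiserver on localhost:8443, so minikube's retry loop eventually gives up with MK_ADDON_ENABLE. As a hedged manual reproduction (a sketch only, assuming the ingress-addon-legacy-721000 profile is still running), the same command the addon manager retries can be issued over SSH; the binary path and Kubernetes version are taken from the log above:

    out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-721000 "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml"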
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-721000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-721000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104",
	        "Created": "2023-02-24T22:53:19.881981106Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 431755,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T22:53:20.292910651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104/hostname",
	        "HostsPath": "/var/lib/docker/containers/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104/hosts",
	        "LogPath": "/var/lib/docker/containers/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104-json.log",
	        "Name": "/ingress-addon-legacy-721000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-721000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-721000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/362b4b04fa11f94ea3b6c9d133b6329c72ff56afa10ae2dc8075f406bd491a76-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/362b4b04fa11f94ea3b6c9d133b6329c72ff56afa10ae2dc8075f406bd491a76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/362b4b04fa11f94ea3b6c9d133b6329c72ff56afa10ae2dc8075f406bd491a76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/362b4b04fa11f94ea3b6c9d133b6329c72ff56afa10ae2dc8075f406bd491a76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-721000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-721000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-721000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-721000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-721000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8a0d28fe38c4dba7108840e91232b9291f4ee393026f51418924bba6a002c37",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57497"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57498"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57499"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f8a0d28fe38c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-721000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b497719eb1bd",
	                        "ingress-addon-legacy-721000"
	                    ],
	                    "NetworkID": "f2abd721140e456fbb14116c7d58abd05bd7c366ab4f46b37c03e8dd515575e7",
	                    "EndpointID": "c38623805b2aa16b19511e6fae30227032d89c8575a3ca9f301423a2cc397e24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
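The inspect output above shows the node publishing the apiserver port 8443/tcp to 127.0.0.1:57499 on the host, with the node itself at 192.168.49.2. A hedged spot check (assuming that port mapping is still current) is to probe the apiserver's healthz endpoint directly from the host; a refused or hung connection here would match the kubectl errors seen earlier:

    curl -k https://127.0.0.1:57499/healthz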
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-721000 -n ingress-addon-legacy-721000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-721000 -n ingress-addon-legacy-721000: exit status 6 (390.570444ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 14:58:56.529150   30564 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-721000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-721000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (93.75s)
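The status check above pins down the client-side half of the failure: the kubeconfig at /Users/jenkins/minikube-integration/15909-26406/kubeconfig has no "ingress-addon-legacy-721000" context, so every kube-client lookup fails even though the container reports "Running". Outside the test harness one would typically follow the hint in the status output; a hedged sketch of the check-and-fix sequence, reusing the same kubeconfig path:

    kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/15909-26406/kubeconfig
    out/minikube-darwin-amd64 -p ingress-addon-legacy-721000 update-context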

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (103.14s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-721000 addons enable ingress-dns --alsologtostderr -v=5
E0224 15:00:10.308130   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-721000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m42.6832061s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 14:58:56.584581   30576 out.go:296] Setting OutFile to fd 1 ...
	I0224 14:58:56.584752   30576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:58:56.584756   30576 out.go:309] Setting ErrFile to fd 2...
	I0224 14:58:56.584760   30576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:58:56.584881   30576 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 14:58:56.607378   30576 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0224 14:58:56.628328   30576 config.go:182] Loaded profile config "ingress-addon-legacy-721000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0224 14:58:56.628349   30576 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-721000"
	I0224 14:58:56.628361   30576 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-721000"
	I0224 14:58:56.628631   30576 host.go:66] Checking if "ingress-addon-legacy-721000" exists ...
	I0224 14:58:56.629179   30576 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-721000 --format={{.State.Status}}
	I0224 14:58:56.709010   30576 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0224 14:58:56.731170   30576 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0224 14:58:56.753079   30576 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0224 14:58:56.753123   30576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0224 14:58:56.753280   30576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-721000
	I0224 14:58:56.812553   30576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57495 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/ingress-addon-legacy-721000/id_rsa Username:docker}
	I0224 14:58:56.915394   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 14:58:56.968492   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:56.968531   30576 retry.go:31] will retry after 342.215972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:57.313037   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 14:58:57.368090   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:57.368108   30576 retry.go:31] will retry after 406.799327ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:57.777228   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 14:58:57.832646   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:57.832662   30576 retry.go:31] will retry after 745.152344ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:58.579207   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 14:58:58.634173   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:58.634191   30576 retry.go:31] will retry after 957.449453ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:59.592584   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 14:58:59.648773   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:58:59.648790   30576 retry.go:31] will retry after 818.787132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:00.469886   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 14:59:00.524310   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:00.524325   30576 retry.go:31] will retry after 2.347702692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:02.874415   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 14:59:02.930315   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:02.930330   30576 retry.go:31] will retry after 3.54997857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:06.482608   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 14:59:06.537427   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:06.537442   30576 retry.go:31] will retry after 4.291965067s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:10.829754   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 14:59:10.883431   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:10.883449   30576 retry.go:31] will retry after 4.288036418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:15.171822   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 14:59:15.226496   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:15.226516   30576 retry.go:31] will retry after 9.429671251s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:24.658614   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 14:59:24.713559   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:24.713574   30576 retry.go:31] will retry after 18.735880165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:43.452055   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 14:59:43.507081   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 14:59:43.507097   30576 retry.go:31] will retry after 24.023423862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 15:00:07.531848   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 15:00:07.585860   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 15:00:07.585875   30576 retry.go:31] will retry after 31.489326325s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 15:00:39.076886   30576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0224 15:00:39.131442   30576 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0224 15:00:39.153308   30576 out.go:177] 
	W0224 15:00:39.175324   30576 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0224 15:00:39.175348   30576 out.go:239] * 
	* 
	W0224 15:00:39.180336   30576 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 15:00:39.202032   30576 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
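As with the ingress addon, every ingress-dns apply is refused on localhost:8443, which points at the apiserver inside the node rather than at the manifest. A hedged way to confirm that from the host (a diagnostic sketch, not part of the recorded run; the profile uses the docker container runtime per the config line above):

    out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-721000 "docker ps --filter name=kube-apiserver"
    out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-721000 "sudo systemctl status kubelet --no-pager"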
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-721000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-721000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104",
	        "Created": "2023-02-24T22:53:19.881981106Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 431755,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T22:53:20.292910651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104/hostname",
	        "HostsPath": "/var/lib/docker/containers/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104/hosts",
	        "LogPath": "/var/lib/docker/containers/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104-json.log",
	        "Name": "/ingress-addon-legacy-721000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-721000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-721000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/362b4b04fa11f94ea3b6c9d133b6329c72ff56afa10ae2dc8075f406bd491a76-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/362b4b04fa11f94ea3b6c9d133b6329c72ff56afa10ae2dc8075f406bd491a76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/362b4b04fa11f94ea3b6c9d133b6329c72ff56afa10ae2dc8075f406bd491a76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/362b4b04fa11f94ea3b6c9d133b6329c72ff56afa10ae2dc8075f406bd491a76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-721000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-721000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-721000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-721000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-721000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8a0d28fe38c4dba7108840e91232b9291f4ee393026f51418924bba6a002c37",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57497"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57498"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57499"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f8a0d28fe38c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-721000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b497719eb1bd",
	                        "ingress-addon-legacy-721000"
	                    ],
	                    "NetworkID": "f2abd721140e456fbb14116c7d58abd05bd7c366ab4f46b37c03e8dd515575e7",
	                    "EndpointID": "c38623805b2aa16b19511e6fae30227032d89c8575a3ca9f301423a2cc397e24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-721000 -n ingress-addon-legacy-721000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-721000 -n ingress-addon-legacy-721000: exit status 6 (394.829748ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 15:00:39.668976   30706 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-721000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-721000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (103.14s)
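
[Editor's note] The warning in the status output above names its own remedy. A minimal sketch of that repair, assuming the profile name from this run and the kubeconfig path the tests use (these commands were not part of the recorded run):

	# point the kubeconfig entry for this profile at the current endpoint
	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-721000
	# confirm the profile now appears as a context
	kubectl --kubeconfig /Users/jenkins/minikube-integration/15909-26406/kubeconfig config get-contexts

If the context is still missing afterwards, the stderr above ("does not appear in .../kubeconfig") suggests the entry was never written during start, not merely left stale.
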

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.49s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:171: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-721000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-721000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104",
	        "Created": "2023-02-24T22:53:19.881981106Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 431755,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T22:53:20.292910651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104/hostname",
	        "HostsPath": "/var/lib/docker/containers/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104/hosts",
	        "LogPath": "/var/lib/docker/containers/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104/b497719eb1bd0868234a535d581f44fb4651d61131c0e288b6af758e1ba29104-json.log",
	        "Name": "/ingress-addon-legacy-721000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-721000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-721000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/362b4b04fa11f94ea3b6c9d133b6329c72ff56afa10ae2dc8075f406bd491a76-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/362b4b04fa11f94ea3b6c9d133b6329c72ff56afa10ae2dc8075f406bd491a76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/362b4b04fa11f94ea3b6c9d133b6329c72ff56afa10ae2dc8075f406bd491a76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/362b4b04fa11f94ea3b6c9d133b6329c72ff56afa10ae2dc8075f406bd491a76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-721000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-721000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-721000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-721000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-721000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8a0d28fe38c4dba7108840e91232b9291f4ee393026f51418924bba6a002c37",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57497"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57498"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57499"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f8a0d28fe38c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-721000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b497719eb1bd",
	                        "ingress-addon-legacy-721000"
	                    ],
	                    "NetworkID": "f2abd721140e456fbb14116c7d58abd05bd7c366ab4f46b37c03e8dd515575e7",
	                    "EndpointID": "c38623805b2aa16b19511e6fae30227032d89c8575a3ca9f301423a2cc397e24",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-721000 -n ingress-addon-legacy-721000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-721000 -n ingress-addon-legacy-721000: exit status 6 (428.293669ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 15:00:40.154741   30718 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-721000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-721000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.49s)
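
[Editor's note] "failed to get Kubernetes client: <nil>" is consistent with the missing kubeconfig entry reported above rather than with a stopped apiserver: the docker inspect output shows the container still running and 8443/tcp published on 127.0.0.1:57499. A rough way to tell the two failure modes apart, assuming docker and curl are available on the build host (hypothetical follow-up, not captured in this run):

	# confirm the host port the apiserver is published on
	docker port ingress-addon-legacy-721000 8443/tcp
	# any HTTP response (even 401/403 without credentials) means the apiserver is reachable;
	# connection refused/reset would point at the control plane itself
	curl -k https://127.0.0.1:57499/version
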

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (11.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-358000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-358000 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-358000 -- rollout status deployment/busybox: (6.440771252s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-358000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:496: expected 2 Pod IPs but got 1, output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:503: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-358000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-358000 -- exec busybox-6b86dd6d48-5zqv7 -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-358000 -- exec busybox-6b86dd6d48-5zqv7 -- nslookup kubernetes.io: exit status 1 (157.550976ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:513: Pod busybox-6b86dd6d48-5zqv7 could not resolve 'kubernetes.io': exit status 1
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-358000 -- exec busybox-6b86dd6d48-tnqbs -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-358000 -- exec busybox-6b86dd6d48-5zqv7 -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-358000 -- exec busybox-6b86dd6d48-5zqv7 -- nslookup kubernetes.default: exit status 1 (153.979796ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:523: Pod busybox-6b86dd6d48-5zqv7 could not resolve 'kubernetes.default': exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-358000 -- exec busybox-6b86dd6d48-tnqbs -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-358000 -- exec busybox-6b86dd6d48-5zqv7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-358000 -- exec busybox-6b86dd6d48-5zqv7 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (154.36338ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:531: Pod busybox-6b86dd6d48-5zqv7 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-358000 -- exec busybox-6b86dd6d48-tnqbs -- nslookup kubernetes.default.svc.cluster.local
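
[Editor's note] The failing lookups above all report the cluster DNS server (10.96.0.10) but return "can't resolve", and the earlier check found only one pod IP ("10.244.0.3") where two were expected, so the failure looks scoped to one busybox replica rather than to CoreDNS being down. A hedged set of follow-up checks, reusing the pod names from this run (not executed as part of the recorded test):

	# which node each busybox replica landed on, and whether both have pod IPs
	out/minikube-darwin-amd64 kubectl -p multinode-358000 -- get pods -o wide
	# the resolver the failing pod is actually using (expected: nameserver 10.96.0.10)
	out/minikube-darwin-amd64 kubectl -p multinode-358000 -- exec busybox-6b86dd6d48-5zqv7 -- cat /etc/resolv.conf
	# where CoreDNS is running relative to the failing pod
	out/minikube-darwin-amd64 kubectl -p multinode-358000 -- -n kube-system get pods -l k8s-app=kube-dns -o wide
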
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-358000
helpers_test.go:235: (dbg) docker inspect multinode-358000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a49b2d313bc19edfab0cb3cbe5fc2adbcdcb89e3ecca6b42f4f3caa0dee30a8",
	        "Created": "2023-02-24T23:06:00.811874367Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 475219,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T23:06:01.100784819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/7a49b2d313bc19edfab0cb3cbe5fc2adbcdcb89e3ecca6b42f4f3caa0dee30a8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a49b2d313bc19edfab0cb3cbe5fc2adbcdcb89e3ecca6b42f4f3caa0dee30a8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a49b2d313bc19edfab0cb3cbe5fc2adbcdcb89e3ecca6b42f4f3caa0dee30a8/hosts",
	        "LogPath": "/var/lib/docker/containers/7a49b2d313bc19edfab0cb3cbe5fc2adbcdcb89e3ecca6b42f4f3caa0dee30a8/7a49b2d313bc19edfab0cb3cbe5fc2adbcdcb89e3ecca6b42f4f3caa0dee30a8-json.log",
	        "Name": "/multinode-358000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-358000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-358000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/01af4a4fa106522bbb3042ade1c0f83ad8608557aa6739ee9122a49a0e2e002f-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/01af4a4fa106522bbb3042ade1c0f83ad8608557aa6739ee9122a49a0e2e002f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/01af4a4fa106522bbb3042ade1c0f83ad8608557aa6739ee9122a49a0e2e002f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/01af4a4fa106522bbb3042ade1c0f83ad8608557aa6739ee9122a49a0e2e002f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-358000",
	                "Source": "/var/lib/docker/volumes/multinode-358000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-358000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-358000",
	                "name.minikube.sigs.k8s.io": "multinode-358000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b870af0eff0f496d738300826c27df29ab50762fd500f6d77e77cfc70c35ff37",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58094"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58095"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58096"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58097"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58093"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b870af0eff0f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-358000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a49b2d313bc",
	                        "multinode-358000"
	                    ],
	                    "NetworkID": "0c9844f869c1c112c7c27c3cf5d33f464f5933c29bc5fe8a123a6550e7d34275",
	                    "EndpointID": "3676bd97086239e08187252245de2e154436a8a683baf61dbaf7da73343aabaa",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-358000 -n multinode-358000
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-358000 logs -n 25: (2.428097495s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p second-290000                                  | second-290000        | jenkins | v1.29.0 | 24 Feb 23 15:04 PST | 24 Feb 23 15:05 PST |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| delete  | -p second-290000                                  | second-290000        | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	| delete  | -p first-289000                                   | first-289000         | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	| start   | -p mount-start-1-857000                           | mount-start-1-857000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-1-857000 ssh -- ls                    | mount-start-1-857000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| start   | -p mount-start-2-870000                           | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-2-870000 ssh -- ls                    | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-857000                           | mount-start-1-857000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-870000 ssh -- ls                    | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-870000                           | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	| start   | -p mount-start-2-870000                           | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	| ssh     | mount-start-2-870000 ssh -- ls                    | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-870000                           | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	| delete  | -p mount-start-1-857000                           | mount-start-1-857000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	| start   | -p multinode-358000                               | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:07 PST |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- apply -f                   | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- rollout                    | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- get pods -o                | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- get pods -o                | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST |                     |
	|         | busybox-6b86dd6d48-5zqv7 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | busybox-6b86dd6d48-tnqbs --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST |                     |
	|         | busybox-6b86dd6d48-5zqv7 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | busybox-6b86dd6d48-tnqbs --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST |                     |
	|         | busybox-6b86dd6d48-5zqv7 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | busybox-6b86dd6d48-tnqbs -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 15:05:52
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 15:05:52.700078   32699 out.go:296] Setting OutFile to fd 1 ...
	I0224 15:05:52.700243   32699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:05:52.700248   32699 out.go:309] Setting ErrFile to fd 2...
	I0224 15:05:52.700251   32699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:05:52.700359   32699 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 15:05:52.701724   32699 out.go:303] Setting JSON to false
	I0224 15:05:52.719942   32699 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":7526,"bootTime":1677272426,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 15:05:52.720068   32699 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 15:05:52.742122   32699 out.go:177] * [multinode-358000] minikube v1.29.0 on Darwin 13.2.1
	I0224 15:05:52.785190   32699 notify.go:220] Checking for updates...
	I0224 15:05:52.807183   32699 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 15:05:52.829307   32699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:05:52.851060   32699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 15:05:52.872078   32699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 15:05:52.893322   32699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	I0224 15:05:52.915118   32699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 15:05:52.936298   32699 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 15:05:52.998124   32699 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 15:05:52.998262   32699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:05:53.140490   32699 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-24 23:05:53.047731623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:05:53.162589   32699 out.go:177] * Using the docker driver based on user configuration
	I0224 15:05:53.184065   32699 start.go:296] selected driver: docker
	I0224 15:05:53.184098   32699 start.go:857] validating driver "docker" against <nil>
	I0224 15:05:53.184117   32699 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 15:05:53.188041   32699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:05:53.329111   32699 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-24 23:05:53.236765216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:05:53.329243   32699 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0224 15:05:53.329418   32699 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 15:05:53.351348   32699 out.go:177] * Using Docker Desktop driver with root privileges
	I0224 15:05:53.372891   32699 cni.go:84] Creating CNI manager for ""
	I0224 15:05:53.372974   32699 cni.go:136] 0 nodes found, recommending kindnet
	I0224 15:05:53.372990   32699 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0224 15:05:53.373007   32699 start_flags.go:319] config:
	{Name:multinode-358000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:05:53.415868   32699 out.go:177] * Starting control plane node multinode-358000 in cluster multinode-358000
	I0224 15:05:53.437132   32699 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 15:05:53.458831   32699 out.go:177] * Pulling base image ...
	I0224 15:05:53.501132   32699 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:05:53.501193   32699 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 15:05:53.501240   32699 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 15:05:53.501262   32699 cache.go:57] Caching tarball of preloaded images
	I0224 15:05:53.501489   32699 preload.go:174] Found /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 15:05:53.501508   32699 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 15:05:53.503803   32699 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/config.json ...
	I0224 15:05:53.503859   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/config.json: {Name:mka69897b551e7928bc6b44fce9cad263e070669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
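The two lines above show the generated cluster config being written to the profile's config.json under a write lock. A minimal sketch of that persist step, using a hypothetical struct that covers only a few of the fields from the config dump (not minikube's actual schema), and omitting the lock:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// Node and ClusterConfig are illustrative subsets of the fields in the
// config dump above, not minikube's real types.
type Node struct {
	Name              string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type ClusterConfig struct {
	Name         string
	Driver       string
	Memory       int
	CPUs         int
	KicBaseImage string
	Nodes        []Node
}

// saveProfile writes config.json into the profile directory, creating it
// if needed; the real code additionally acquires a file lock first.
func saveProfile(profileDir string, cfg ClusterConfig) error {
	if err := os.MkdirAll(profileDir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(profileDir, "config.json"), data, 0o644)
}

func main() {
	cfg := ClusterConfig{
		Name:   "multinode-358000",
		Driver: "docker",
		Memory: 2200,
		CPUs:   2,
		Nodes:  []Node{{Port: 8443, KubernetesVersion: "v1.26.1", ControlPlane: true, Worker: true}},
	}
	fmt.Println(saveProfile(filepath.Join(os.TempDir(), "multinode-358000"), cfg))
}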
	I0224 15:05:53.577571   32699 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 15:05:53.577615   32699 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 15:05:53.577682   32699 cache.go:193] Successfully downloaded all kic artifacts
	I0224 15:05:53.577742   32699 start.go:364] acquiring machines lock for multinode-358000: {Name:mk212d26ea22c7f1fb6b8f9cd0233a6686bc192d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 15:05:53.577973   32699 start.go:368] acquired machines lock for "multinode-358000" in 212.735µs
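The machines lock above is acquired with a 500ms retry delay and a 10m timeout. A stand-alone sketch of that acquire-with-retry pattern; the try callback here is a stand-in, not minikube's real file lock:

package main

import (
	"errors"
	"fmt"
	"time"
)

// acquireWithRetry polls try() every delay until it succeeds or the timeout
// elapses, mirroring the Delay/Timeout settings in the lock spec above.
func acquireWithRetry(try func() (bool, error), delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := try()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	attempts := 0
	err := acquireWithRetry(func() (bool, error) {
		attempts++
		return attempts >= 3, nil // pretend the lock frees up on the third try
	}, 500*time.Millisecond, 10*time.Minute)
	fmt.Println("acquired after", attempts, "attempts, err =", err)
}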
	I0224 15:05:53.578014   32699 start.go:93] Provisioning new machine with config: &{Name:multinode-358000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 15:05:53.578135   32699 start.go:125] createHost starting for "" (driver="docker")
	I0224 15:05:53.621823   32699 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0224 15:05:53.622290   32699 start.go:159] libmachine.API.Create for "multinode-358000" (driver="docker")
	I0224 15:05:53.622354   32699 client.go:168] LocalClient.Create starting
	I0224 15:05:53.622646   32699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem
	I0224 15:05:53.622777   32699 main.go:141] libmachine: Decoding PEM data...
	I0224 15:05:53.622828   32699 main.go:141] libmachine: Parsing certificate...
	I0224 15:05:53.622978   32699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem
	I0224 15:05:53.623064   32699 main.go:141] libmachine: Decoding PEM data...
	I0224 15:05:53.623082   32699 main.go:141] libmachine: Parsing certificate...
	I0224 15:05:53.624014   32699 cli_runner.go:164] Run: docker network inspect multinode-358000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0224 15:05:53.684011   32699 cli_runner.go:211] docker network inspect multinode-358000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0224 15:05:53.684109   32699 network_create.go:281] running [docker network inspect multinode-358000] to gather additional debugging logs...
	I0224 15:05:53.684127   32699 cli_runner.go:164] Run: docker network inspect multinode-358000
	W0224 15:05:53.739375   32699 cli_runner.go:211] docker network inspect multinode-358000 returned with exit code 1
	I0224 15:05:53.739405   32699 network_create.go:284] error running [docker network inspect multinode-358000]: docker network inspect multinode-358000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-358000
	I0224 15:05:53.739418   32699 network_create.go:286] output of [docker network inspect multinode-358000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-358000
	
	** /stderr **
	I0224 15:05:53.739519   32699 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 15:05:53.799037   32699 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0224 15:05:53.799354   32699 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00129b990}
	I0224 15:05:53.799367   32699 network_create.go:123] attempt to create docker network multinode-358000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0224 15:05:53.799437   32699 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-358000 multinode-358000
	I0224 15:05:53.890418   32699 network_create.go:107] docker network multinode-358000 192.168.58.0/24 created
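The network_create step shells out to "docker network create" with an explicit subnet, gateway and MTU, as the Run line above shows. A minimal sketch of issuing that same call from Go via os/exec, using the values from this run:

package main

import (
	"fmt"
	"os/exec"
)

// createMinikubeNetwork runs the "docker network create" command from the log:
// a bridge network with a fixed subnet/gateway, MTU option, and minikube labels.
func createMinikubeNetwork(name, subnet, gateway string, mtu int) error {
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=" + subnet, "--gateway=" + gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := createMinikubeNetwork("multinode-358000", "192.168.58.0/24", "192.168.58.1", 1500)
	fmt.Println(err)
}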
	I0224 15:05:53.890457   32699 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-358000" container
	I0224 15:05:53.890580   32699 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 15:05:53.946803   32699 cli_runner.go:164] Run: docker volume create multinode-358000 --label name.minikube.sigs.k8s.io=multinode-358000 --label created_by.minikube.sigs.k8s.io=true
	I0224 15:05:54.004323   32699 oci.go:103] Successfully created a docker volume multinode-358000
	I0224 15:05:54.004453   32699 cli_runner.go:164] Run: docker run --rm --name multinode-358000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-358000 --entrypoint /usr/bin/test -v multinode-358000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0224 15:05:54.456252   32699 oci.go:107] Successfully prepared a docker volume multinode-358000
	I0224 15:05:54.456288   32699 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:05:54.456302   32699 kic.go:190] Starting extracting preloaded images to volume ...
	I0224 15:05:54.456407   32699 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-358000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 15:06:00.615283   32699 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-358000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.158620865s)
	I0224 15:06:00.615308   32699 kic.go:199] duration metric: took 6.158821 seconds to extract preloaded images to volume
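Extracting the preloaded image tarball into the machine's Docker volume is done with a throwaway container whose entrypoint is tar, per the Run line above. A sketch of building that command from Go (kicbase image digest omitted here for brevity):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload untars a preload tarball into a named Docker volume by
// mounting both into a short-lived container whose entrypoint is tar.
func extractPreload(tarball, volume, baseImage string) error {
	args := []string{
		"run", "--rm", "--entrypoint", "/usr/bin/tar",
		"-v", tarball + ":/preloaded.tar:ro",
		"-v", volume + ":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4",
		"multinode-358000",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768",
	)
	fmt.Println(err)
}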
	I0224 15:06:00.615424   32699 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 15:06:00.757047   32699 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-358000 --name multinode-358000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-358000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-358000 --network multinode-358000 --ip 192.168.58.2 --volume multinode-358000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0224 15:06:01.109529   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Running}}
	I0224 15:06:01.172749   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:06:01.240155   32699 cli_runner.go:164] Run: docker exec multinode-358000 stat /var/lib/dpkg/alternatives/iptables
	I0224 15:06:01.352021   32699 oci.go:144] the created container "multinode-358000" has a running status.
	I0224 15:06:01.352058   32699 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa...
	I0224 15:06:01.599598   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0224 15:06:01.599677   32699 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 15:06:01.704310   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:06:01.760772   32699 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 15:06:01.760792   32699 kic_runner.go:114] Args: [docker exec --privileged multinode-358000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 15:06:01.861668   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:06:01.918839   32699 machine.go:88] provisioning docker machine ...
	I0224 15:06:01.918883   32699 ubuntu.go:169] provisioning hostname "multinode-358000"
	I0224 15:06:01.918988   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:01.976468   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:01.976850   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58094 <nil> <nil>}
	I0224 15:06:01.976864   32699 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-358000 && echo "multinode-358000" | sudo tee /etc/hostname
	I0224 15:06:02.120887   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-358000
	
	I0224 15:06:02.120963   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:02.178550   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:02.178931   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58094 <nil> <nil>}
	I0224 15:06:02.178944   32699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-358000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-358000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-358000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 15:06:02.314857   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 15:06:02.314883   32699 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
	I0224 15:06:02.314902   32699 ubuntu.go:177] setting up certificates
	I0224 15:06:02.314909   32699 provision.go:83] configureAuth start
	I0224 15:06:02.315000   32699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000
	I0224 15:06:02.371002   32699 provision.go:138] copyHostCerts
	I0224 15:06:02.371047   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:06:02.371106   32699 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
	I0224 15:06:02.371114   32699 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:06:02.371235   32699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
	I0224 15:06:02.371403   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:06:02.371434   32699 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
	I0224 15:06:02.371439   32699 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:06:02.371505   32699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
	I0224 15:06:02.371612   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:06:02.371651   32699 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
	I0224 15:06:02.371655   32699 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:06:02.371717   32699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
	I0224 15:06:02.371826   32699 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.multinode-358000 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-358000]
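provision.go generates a server certificate whose SANs cover the container's static IP, 127.0.0.1, localhost, minikube and the machine name. A self-contained standard-library sketch producing a certificate with the same SANs; it is self-signed here for brevity, whereas the real cert is signed by minikube's CA key:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		fmt.Println(err)
		return
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-358000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		// SANs from the log line: DNS names plus IP addresses.
		DNSNames:    []string{"localhost", "minikube", "multinode-358000"},
		IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}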
	I0224 15:06:02.441506   32699 provision.go:172] copyRemoteCerts
	I0224 15:06:02.441562   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 15:06:02.441619   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:02.498703   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:02.594945   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0224 15:06:02.595046   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 15:06:02.612524   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0224 15:06:02.612584   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0224 15:06:02.629650   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0224 15:06:02.629727   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 15:06:02.647276   32699 provision.go:86] duration metric: configureAuth took 332.338688ms
	I0224 15:06:02.647290   32699 ubuntu.go:193] setting minikube options for container-runtime
	I0224 15:06:02.647451   32699 config.go:182] Loaded profile config "multinode-358000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:06:02.647516   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:02.734568   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:02.735051   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58094 <nil> <nil>}
	I0224 15:06:02.735072   32699 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 15:06:02.871688   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 15:06:02.871707   32699 ubuntu.go:71] root file system type: overlay
	I0224 15:06:02.871790   32699 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 15:06:02.871877   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:02.929413   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:02.929798   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58094 <nil> <nil>}
	I0224 15:06:02.929847   32699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 15:06:03.072077   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 15:06:03.072164   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:03.128700   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:03.129050   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58094 <nil> <nil>}
	I0224 15:06:03.129063   32699 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 15:06:03.745599   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:06:03.070159446 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 15:06:03.745620   32699 machine.go:91] provisioned docker machine in 1.826705767s
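The long SSH command above implements a "replace only if changed" idiom: diff the rendered docker.service against the installed one, and only when they differ move the new file into place, daemon-reload, enable, and restart Docker. A minimal Go sketch of the same idea; the path and unit contents are illustrative and the systemctl follow-up is left to the caller:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateFileIfChanged writes rendered to path only when the on-disk content
// differs, so an unchanged unit file means no daemon-reload or restart.
func updateFileIfChanged(path string, rendered []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(path+".new", path)
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	changed, err := updateFileIfChanged("/tmp/docker.service.example", unit)
	fmt.Println("changed:", changed, "err:", err)
	if changed {
		fmt.Println("a real provisioner would now run: systemctl daemon-reload && systemctl enable docker && systemctl restart docker")
	}
}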
	I0224 15:06:03.745626   32699 client.go:171] LocalClient.Create took 10.12295837s
	I0224 15:06:03.745643   32699 start.go:167] duration metric: libmachine.API.Create for "multinode-358000" took 10.123054882s
	I0224 15:06:03.745652   32699 start.go:300] post-start starting for "multinode-358000" (driver="docker")
	I0224 15:06:03.745657   32699 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 15:06:03.745745   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 15:06:03.745797   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:03.805311   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:03.902292   32699 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 15:06:03.905822   32699 command_runner.go:130] > NAME="Ubuntu"
	I0224 15:06:03.905837   32699 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0224 15:06:03.905842   32699 command_runner.go:130] > ID=ubuntu
	I0224 15:06:03.905849   32699 command_runner.go:130] > ID_LIKE=debian
	I0224 15:06:03.905855   32699 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0224 15:06:03.905859   32699 command_runner.go:130] > VERSION_ID="20.04"
	I0224 15:06:03.905865   32699 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0224 15:06:03.905873   32699 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0224 15:06:03.905878   32699 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0224 15:06:03.905887   32699 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0224 15:06:03.905891   32699 command_runner.go:130] > VERSION_CODENAME=focal
	I0224 15:06:03.905895   32699 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0224 15:06:03.905943   32699 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 15:06:03.905957   32699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 15:06:03.905965   32699 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 15:06:03.905969   32699 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 15:06:03.905980   32699 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
	I0224 15:06:03.906085   32699 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
	I0224 15:06:03.906259   32699 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
	I0224 15:06:03.906265   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> /etc/ssl/certs/268712.pem
	I0224 15:06:03.906453   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 15:06:03.913757   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:06:03.930925   32699 start.go:303] post-start completed in 185.257841ms
	I0224 15:06:03.931463   32699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000
	I0224 15:06:03.987950   32699 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/config.json ...
	I0224 15:06:03.988379   32699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 15:06:03.988438   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:04.044302   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:04.136580   32699 command_runner.go:130] > 5%!
	(MISSING)I0224 15:06:04.136658   32699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 15:06:04.141104   32699 command_runner.go:130] > 93G
	I0224 15:06:04.141412   32699 start.go:128] duration metric: createHost completed in 10.562952813s
	I0224 15:06:04.141426   32699 start.go:83] releasing machines lock for "multinode-358000", held for 10.563126002s
	I0224 15:06:04.141536   32699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000
	I0224 15:06:04.197175   32699 ssh_runner.go:195] Run: cat /version.json
	I0224 15:06:04.197187   32699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 15:06:04.197240   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:04.197262   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:04.257562   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:04.257707   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:04.400238   32699 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0224 15:06:04.401751   32699 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0224 15:06:04.401883   32699 ssh_runner.go:195] Run: systemctl --version
	I0224 15:06:04.406448   32699 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0224 15:06:04.406470   32699 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0224 15:06:04.406821   32699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 15:06:04.411614   32699 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0224 15:06:04.411623   32699 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0224 15:06:04.411628   32699 command_runner.go:130] > Device: a6h/166d	Inode: 2885207     Links: 1
	I0224 15:06:04.411636   32699 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 15:06:04.411646   32699 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0224 15:06:04.411650   32699 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0224 15:06:04.411654   32699 command_runner.go:130] > Change: 2023-02-24 22:41:56.862825099 +0000
	I0224 15:06:04.411658   32699 command_runner.go:130] >  Birth: -
	I0224 15:06:04.411997   32699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 15:06:04.431940   32699 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
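The find/sed one-liner above patches any loopback CNI conf so that it carries a "name" field and a cniVersion of 1.0.0. The same patch expressed with encoding/json, as a sketch; the real code edits the file in place with sed:

package main

import (
	"encoding/json"
	"fmt"
)

// patchLoopbackConf adds a "name" if missing and pins cniVersion to 1.0.0,
// matching the effect of the sed commands in the log.
func patchLoopbackConf(raw []byte) ([]byte, error) {
	var conf map[string]interface{}
	if err := json.Unmarshal(raw, &conf); err != nil {
		return nil, err
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	return json.MarshalIndent(conf, "", "  ")
}

func main() {
	in := []byte(`{"cniVersion": "0.3.1", "type": "loopback"}`)
	out, err := patchLoopbackConf(in)
	fmt.Println(string(out), err)
}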
	I0224 15:06:04.432008   32699 ssh_runner.go:195] Run: which cri-dockerd
	I0224 15:06:04.435777   32699 command_runner.go:130] > /usr/bin/cri-dockerd
	I0224 15:06:04.435992   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 15:06:04.443243   32699 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 15:06:04.456172   32699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 15:06:04.470818   32699 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0224 15:06:04.470854   32699 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
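Competing CNI configs are disabled by renaming any bridge or podman conf in /etc/cni/net.d with an .mk_disabled suffix, as the find/mv command above shows. A sketch of the same rename pass in Go (the helper itself is illustrative; the directory is the real one from the log):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableConflictingCNIConfigs renames bridge/podman CNI configs so only the
// CNI minikube manages (kindnet in this run) stays active.
func disableConflictingCNIConfigs(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return disabled, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	moved, err := disableConflictingCNIConfigs("/etc/cni/net.d")
	fmt.Println(moved, err)
}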
	I0224 15:06:04.470865   32699 start.go:485] detecting cgroup driver to use...
	I0224 15:06:04.470875   32699 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:06:04.470951   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:06:04.483120   32699 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0224 15:06:04.483136   32699 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0224 15:06:04.483918   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 15:06:04.492451   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 15:06:04.500930   32699 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 15:06:04.500993   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 15:06:04.509362   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:06:04.517801   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 15:06:04.526164   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:06:04.534551   32699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 15:06:04.542391   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 15:06:04.550807   32699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 15:06:04.557252   32699 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0224 15:06:04.557968   32699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 15:06:04.565084   32699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:06:04.629317   32699 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 15:06:04.701330   32699 start.go:485] detecting cgroup driver to use...
	I0224 15:06:04.701349   32699 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:06:04.701410   32699 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 15:06:04.710823   32699 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0224 15:06:04.710953   32699 command_runner.go:130] > [Unit]
	I0224 15:06:04.710964   32699 command_runner.go:130] > Description=Docker Application Container Engine
	I0224 15:06:04.710971   32699 command_runner.go:130] > Documentation=https://docs.docker.com
	I0224 15:06:04.710977   32699 command_runner.go:130] > BindsTo=containerd.service
	I0224 15:06:04.710982   32699 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0224 15:06:04.710986   32699 command_runner.go:130] > Wants=network-online.target
	I0224 15:06:04.710992   32699 command_runner.go:130] > Requires=docker.socket
	I0224 15:06:04.710995   32699 command_runner.go:130] > StartLimitBurst=3
	I0224 15:06:04.711000   32699 command_runner.go:130] > StartLimitIntervalSec=60
	I0224 15:06:04.711009   32699 command_runner.go:130] > [Service]
	I0224 15:06:04.711013   32699 command_runner.go:130] > Type=notify
	I0224 15:06:04.711016   32699 command_runner.go:130] > Restart=on-failure
	I0224 15:06:04.711022   32699 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0224 15:06:04.711047   32699 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0224 15:06:04.711053   32699 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0224 15:06:04.711059   32699 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0224 15:06:04.711065   32699 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0224 15:06:04.711070   32699 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0224 15:06:04.711076   32699 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0224 15:06:04.711089   32699 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0224 15:06:04.711095   32699 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0224 15:06:04.711102   32699 command_runner.go:130] > ExecStart=
	I0224 15:06:04.711115   32699 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0224 15:06:04.711120   32699 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0224 15:06:04.711125   32699 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0224 15:06:04.711132   32699 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0224 15:06:04.711136   32699 command_runner.go:130] > LimitNOFILE=infinity
	I0224 15:06:04.711139   32699 command_runner.go:130] > LimitNPROC=infinity
	I0224 15:06:04.711143   32699 command_runner.go:130] > LimitCORE=infinity
	I0224 15:06:04.711148   32699 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0224 15:06:04.711152   32699 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0224 15:06:04.711156   32699 command_runner.go:130] > TasksMax=infinity
	I0224 15:06:04.711159   32699 command_runner.go:130] > TimeoutStartSec=0
	I0224 15:06:04.711164   32699 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0224 15:06:04.711169   32699 command_runner.go:130] > Delegate=yes
	I0224 15:06:04.711173   32699 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0224 15:06:04.711177   32699 command_runner.go:130] > KillMode=process
	I0224 15:06:04.711185   32699 command_runner.go:130] > [Install]
	I0224 15:06:04.711190   32699 command_runner.go:130] > WantedBy=multi-user.target
	I0224 15:06:04.711753   32699 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 15:06:04.711819   32699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 15:06:04.722696   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:06:04.736120   32699 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 15:06:04.736140   32699 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 15:06:04.736894   32699 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 15:06:04.810788   32699 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 15:06:04.901744   32699 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 15:06:04.901763   32699 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
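The 144-byte payload copied to /etc/docker/daemon.json is not printed in the log, but the previous line states its purpose: configure Docker to use cgroupfs. A representative (assumed, not verbatim) payload written from Go; "exec-opts" with "native.cgroupdriver=..." is the standard dockerd setting for pinning the cgroup driver:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// writeDaemonJSON writes a minimal daemon.json that pins the cgroup driver.
// The keys below are standard dockerd options; the exact file minikube ships
// is not shown in the log, so treat the contents as illustrative.
func writeDaemonJSON(path, cgroupDriver string) error {
	cfg := map[string]interface{}{
		"exec-opts":      []string{"native.cgroupdriver=" + cgroupDriver},
		"log-driver":     "json-file",
		"storage-driver": "overlay2",
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	fmt.Println(writeDaemonJSON("/tmp/daemon.json.example", "cgroupfs"))
}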
	I0224 15:06:04.915707   32699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:06:05.014497   32699 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 15:06:05.237783   32699 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:06:05.310898   32699 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0224 15:06:05.310967   32699 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 15:06:05.385561   32699 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:06:05.455106   32699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:06:05.523153   32699 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 15:06:05.534415   32699 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 15:06:05.534505   32699 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 15:06:05.538376   32699 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0224 15:06:05.538389   32699 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0224 15:06:05.538394   32699 command_runner.go:130] > Device: aeh/174d	Inode: 206         Links: 1
	I0224 15:06:05.538399   32699 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0224 15:06:05.538405   32699 command_runner.go:130] > Access: 2023-02-24 23:06:05.530159588 +0000
	I0224 15:06:05.538409   32699 command_runner.go:130] > Modify: 2023-02-24 23:06:05.530159588 +0000
	I0224 15:06:05.538415   32699 command_runner.go:130] > Change: 2023-02-24 23:06:05.531159588 +0000
	I0224 15:06:05.538426   32699 command_runner.go:130] >  Birth: -
	I0224 15:06:05.538523   32699 start.go:553] Will wait 60s for crictl version
	I0224 15:06:05.538570   32699 ssh_runner.go:195] Run: which crictl
	I0224 15:06:05.542039   32699 command_runner.go:130] > /usr/bin/crictl
	I0224 15:06:05.542186   32699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 15:06:05.641216   32699 command_runner.go:130] > Version:  0.1.0
	I0224 15:06:05.641227   32699 command_runner.go:130] > RuntimeName:  docker
	I0224 15:06:05.641231   32699 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0224 15:06:05.641236   32699 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0224 15:06:05.643151   32699 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 15:06:05.643224   32699 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:06:05.666059   32699 command_runner.go:130] > 23.0.1
	I0224 15:06:05.668045   32699 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:06:05.691085   32699 command_runner.go:130] > 23.0.1
	I0224 15:06:05.736151   32699 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 15:06:05.736357   32699 cli_runner.go:164] Run: docker exec -t multinode-358000 dig +short host.docker.internal
	I0224 15:06:05.847649   32699 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 15:06:05.847759   32699 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 15:06:05.852257   32699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:06:05.862318   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:05.919094   32699 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:06:05.919180   32699 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:06:05.938574   32699 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0224 15:06:05.938594   32699 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0224 15:06:05.938599   32699 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0224 15:06:05.938606   32699 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0224 15:06:05.938611   32699 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0224 15:06:05.938616   32699 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0224 15:06:05.938620   32699 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0224 15:06:05.938626   32699 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 15:06:05.940094   32699 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 15:06:05.940108   32699 docker.go:560] Images already preloaded, skipping extraction
	I0224 15:06:05.940179   32699 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:06:05.959670   32699 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0224 15:06:05.959683   32699 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0224 15:06:05.959688   32699 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0224 15:06:05.959698   32699 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0224 15:06:05.959705   32699 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0224 15:06:05.959713   32699 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0224 15:06:05.959718   32699 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0224 15:06:05.959728   32699 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 15:06:05.961484   32699 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 15:06:05.961497   32699 cache_images.go:84] Images are preloaded, skipping loading
	I0224 15:06:05.961590   32699 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 15:06:05.985614   32699 command_runner.go:130] > cgroupfs
	I0224 15:06:05.987372   32699 cni.go:84] Creating CNI manager for ""
	I0224 15:06:05.987384   32699 cni.go:136] 1 nodes found, recommending kindnet
	I0224 15:06:05.987403   32699 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 15:06:05.987421   32699 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-358000 NodeName:multinode-358000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 15:06:05.987534   32699 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-358000"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 15:06:05.987610   32699 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-358000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
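The kubeadm config and kubelet flags above are rendered from the options struct logged at kubeadm.go:172. A minimal sketch of that style of rendering with Go's text/template; the struct and template here are simplified stand-ins, not minikube's real ones (those live in its bootstrapper packages):

    package main

    import (
        "os"
        "text/template"
    )

    // kubeadmParams is a cut-down stand-in for the options minikube logs above.
    type kubeadmParams struct {
        ClusterName       string
        KubernetesVersion string
        ControlPlane      string
        PodSubnet         string
        ServiceSubnet     string
    }

    var clusterTmpl = template.Must(template.New("kubeadm").Parse(
        "apiVersion: kubeadm.k8s.io/v1beta3\n" +
            "kind: ClusterConfiguration\n" +
            "clusterName: {{.ClusterName}}\n" +
            "kubernetesVersion: {{.KubernetesVersion}}\n" +
            "controlPlaneEndpoint: {{.ControlPlane}}\n" +
            "networking:\n" +
            "  podSubnet: \"{{.PodSubnet}}\"\n" +
            "  serviceSubnet: {{.ServiceSubnet}}\n"))

    func main() {
        _ = clusterTmpl.Execute(os.Stdout, kubeadmParams{
            ClusterName:       "mk",
            KubernetesVersion:  "v1.26.1",
            ControlPlane:      "control-plane.minikube.internal:8443",
            PodSubnet:         "10.244.0.0/16",
            ServiceSubnet:     "10.96.0.0/12",
        })
    }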
	I0224 15:06:05.987689   32699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 15:06:05.995295   32699 command_runner.go:130] > kubeadm
	I0224 15:06:05.995306   32699 command_runner.go:130] > kubectl
	I0224 15:06:05.995311   32699 command_runner.go:130] > kubelet
	I0224 15:06:05.996200   32699 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 15:06:05.996290   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 15:06:06.004706   32699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0224 15:06:06.017726   32699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 15:06:06.030758   32699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0224 15:06:06.043886   32699 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0224 15:06:06.047647   32699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:06:06.057556   32699 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000 for IP: 192.168.58.2
	I0224 15:06:06.057575   32699 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.057764   32699 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 15:06:06.057832   32699 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 15:06:06.057876   32699 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key
	I0224 15:06:06.057890   32699 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt with IP's: []
	I0224 15:06:06.218063   32699 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt ...
	I0224 15:06:06.218072   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt: {Name:mkf9646423d8a5efec8e5fc88a77aa92f40ab15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.218366   32699 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key ...
	I0224 15:06:06.218373   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key: {Name:mk2a0c1a353142ca931c8656aa00ef7eeee445a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.218568   32699 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key.cee25041
	I0224 15:06:06.218584   32699 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0224 15:06:06.255271   32699 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt.cee25041 ...
	I0224 15:06:06.255280   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt.cee25041: {Name:mkfa6679948f9a1b5bdbbd6c85f67c8f1bb24f07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.255539   32699 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key.cee25041 ...
	I0224 15:06:06.255550   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key.cee25041: {Name:mkd0d79de3148e3834748981d65e43b5337ab740 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.255768   32699 certs.go:333] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt
	I0224 15:06:06.255975   32699 certs.go:337] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key
	I0224 15:06:06.256184   32699 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.key
	I0224 15:06:06.256197   32699 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.crt with IP's: []
	I0224 15:06:06.557247   32699 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.crt ...
	I0224 15:06:06.557266   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.crt: {Name:mk6c0746cc6f68aa1e42c1925a73aab63483dd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.557559   32699 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.key ...
	I0224 15:06:06.557578   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.key: {Name:mk07b966efb39d3ac2ad033ba22960f8ad80f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.557813   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0224 15:06:06.557846   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0224 15:06:06.557894   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0224 15:06:06.557932   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0224 15:06:06.557952   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0224 15:06:06.557974   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0224 15:06:06.557992   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0224 15:06:06.558010   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0224 15:06:06.558105   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 15:06:06.558154   32699 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 15:06:06.558165   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 15:06:06.558202   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 15:06:06.558235   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 15:06:06.558267   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 15:06:06.558334   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:06:06.558368   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> /usr/share/ca-certificates/268712.pem
	I0224 15:06:06.558391   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:06:06.558409   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem -> /usr/share/ca-certificates/26871.pem
	I0224 15:06:06.558906   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 15:06:06.577898   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 15:06:06.594905   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 15:06:06.611993   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 15:06:06.629312   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 15:06:06.646256   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 15:06:06.663331   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 15:06:06.680653   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 15:06:06.698206   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 15:06:06.715800   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 15:06:06.733019   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 15:06:06.750170   32699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 15:06:06.763103   32699 ssh_runner.go:195] Run: openssl version
	I0224 15:06:06.768175   32699 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0224 15:06:06.768504   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 15:06:06.776582   32699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 15:06:06.780496   32699 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:06:06.780635   32699 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:06:06.780676   32699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 15:06:06.785834   32699 command_runner.go:130] > 3ec20f2e
	I0224 15:06:06.786247   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 15:06:06.794393   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 15:06:06.802362   32699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:06:06.806286   32699 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:06:06.806318   32699 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:06:06.806367   32699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:06:06.811484   32699 command_runner.go:130] > b5213941
	I0224 15:06:06.811883   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 15:06:06.820100   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 15:06:06.828204   32699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 15:06:06.832234   32699 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:06:06.832390   32699 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:06:06.832436   32699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 15:06:06.837521   32699 command_runner.go:130] > 51391683
	I0224 15:06:06.837773   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
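Each certificate above gets the same treatment: copy it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and point /etc/ssl/certs/<hash>.0 at it so OpenSSL's lookup-by-hash can find it. A Go sketch of the hash-and-link step, shelling out to openssl just as the log does (simplified; not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash computes the subject hash of certPath with openssl and
    // ensures /etc/ssl/certs/<hash>.0 points at it, mirroring the
    // "openssl x509 -hash -noout" plus "test -L || ln -fs" sequence above.
    func linkCertByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // link already exists
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }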
	I0224 15:06:06.845990   32699 kubeadm.go:401] StartCluster: {Name:multinode-358000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:06:06.846099   32699 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 15:06:06.865390   32699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 15:06:06.873234   32699 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0224 15:06:06.873245   32699 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0224 15:06:06.873254   32699 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0224 15:06:06.873313   32699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 15:06:06.880903   32699 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 15:06:06.880953   32699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 15:06:06.888270   32699 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0224 15:06:06.888283   32699 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0224 15:06:06.888295   32699 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0224 15:06:06.888308   32699 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 15:06:06.888339   32699 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 15:06:06.888363   32699 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 15:06:06.937859   32699 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0224 15:06:06.937870   32699 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0224 15:06:06.938198   32699 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 15:06:06.938210   32699 command_runner.go:130] > [preflight] Running pre-flight checks
	I0224 15:06:07.046825   32699 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 15:06:07.046862   32699 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 15:06:07.047006   32699 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 15:06:07.047014   32699 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 15:06:07.047128   32699 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 15:06:07.047135   32699 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 15:06:07.179277   32699 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 15:06:07.179291   32699 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 15:06:07.221486   32699 out.go:204]   - Generating certificates and keys ...
	I0224 15:06:07.221624   32699 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0224 15:06:07.221650   32699 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 15:06:07.221778   32699 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0224 15:06:07.221785   32699 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 15:06:07.309934   32699 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 15:06:07.309947   32699 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 15:06:07.498825   32699 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0224 15:06:07.498842   32699 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0224 15:06:07.761954   32699 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0224 15:06:07.761962   32699 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0224 15:06:07.884540   32699 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0224 15:06:07.884595   32699 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0224 15:06:08.148035   32699 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0224 15:06:08.148051   32699 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0224 15:06:08.148147   32699 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-358000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 15:06:08.148152   32699 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-358000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 15:06:08.255213   32699 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0224 15:06:08.255226   32699 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0224 15:06:08.255438   32699 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-358000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 15:06:08.255443   32699 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-358000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 15:06:08.437384   32699 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 15:06:08.437399   32699 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 15:06:08.545210   32699 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 15:06:08.545220   32699 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 15:06:08.589595   32699 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0224 15:06:08.589599   32699 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0224 15:06:08.589653   32699 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 15:06:08.589660   32699 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 15:06:08.925094   32699 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 15:06:08.925119   32699 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 15:06:09.012936   32699 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 15:06:09.012968   32699 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 15:06:09.243341   32699 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 15:06:09.243356   32699 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 15:06:09.383002   32699 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 15:06:09.383017   32699 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 15:06:09.393734   32699 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:06:09.393740   32699 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:06:09.394431   32699 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:06:09.394439   32699 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:06:09.394474   32699 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0224 15:06:09.394483   32699 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0224 15:06:09.463550   32699 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 15:06:09.463554   32699 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 15:06:09.484938   32699 out.go:204]   - Booting up control plane ...
	I0224 15:06:09.485014   32699 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 15:06:09.485028   32699 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 15:06:09.485100   32699 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 15:06:09.485107   32699 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 15:06:09.485167   32699 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 15:06:09.485179   32699 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 15:06:09.485248   32699 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 15:06:09.485255   32699 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 15:06:09.485396   32699 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 15:06:09.485398   32699 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 15:06:18.970599   32699 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.502052 seconds
	I0224 15:06:18.970617   32699 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.502052 seconds
	I0224 15:06:18.970735   32699 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 15:06:18.970745   32699 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 15:06:18.978671   32699 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 15:06:18.978699   32699 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 15:06:19.494147   32699 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0224 15:06:19.494183   32699 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0224 15:06:19.494397   32699 kubeadm.go:322] [mark-control-plane] Marking the node multinode-358000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 15:06:19.494409   32699 command_runner.go:130] > [mark-control-plane] Marking the node multinode-358000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 15:06:20.002931   32699 kubeadm.go:322] [bootstrap-token] Using token: 5c43ss.8mz735kcmuqfeuba
	I0224 15:06:20.002945   32699 command_runner.go:130] > [bootstrap-token] Using token: 5c43ss.8mz735kcmuqfeuba
	I0224 15:06:20.025718   32699 out.go:204]   - Configuring RBAC rules ...
	I0224 15:06:20.025833   32699 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 15:06:20.025845   32699 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 15:06:20.141758   32699 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 15:06:20.141774   32699 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 15:06:20.147058   32699 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 15:06:20.147068   32699 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 15:06:20.149662   32699 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 15:06:20.149668   32699 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 15:06:20.151979   32699 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 15:06:20.151988   32699 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 15:06:20.154008   32699 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 15:06:20.154025   32699 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 15:06:20.162198   32699 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 15:06:20.162204   32699 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 15:06:20.319035   32699 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0224 15:06:20.319039   32699 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0224 15:06:20.559257   32699 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0224 15:06:20.559289   32699 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0224 15:06:20.560060   32699 kubeadm.go:322] 
	I0224 15:06:20.560138   32699 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0224 15:06:20.560165   32699 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0224 15:06:20.560182   32699 kubeadm.go:322] 
	I0224 15:06:20.560269   32699 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0224 15:06:20.560285   32699 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0224 15:06:20.560300   32699 kubeadm.go:322] 
	I0224 15:06:20.560334   32699 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0224 15:06:20.560350   32699 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0224 15:06:20.560444   32699 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 15:06:20.560450   32699 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 15:06:20.560529   32699 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 15:06:20.560539   32699 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 15:06:20.560544   32699 kubeadm.go:322] 
	I0224 15:06:20.560587   32699 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0224 15:06:20.560599   32699 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0224 15:06:20.560617   32699 kubeadm.go:322] 
	I0224 15:06:20.560682   32699 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 15:06:20.560695   32699 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 15:06:20.560713   32699 kubeadm.go:322] 
	I0224 15:06:20.560790   32699 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0224 15:06:20.560799   32699 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0224 15:06:20.560879   32699 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 15:06:20.560889   32699 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 15:06:20.560988   32699 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 15:06:20.560996   32699 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 15:06:20.561002   32699 kubeadm.go:322] 
	I0224 15:06:20.561104   32699 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0224 15:06:20.561118   32699 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0224 15:06:20.561181   32699 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0224 15:06:20.561187   32699 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0224 15:06:20.561192   32699 kubeadm.go:322] 
	I0224 15:06:20.561276   32699 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5c43ss.8mz735kcmuqfeuba \
	I0224 15:06:20.561289   32699 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 5c43ss.8mz735kcmuqfeuba \
	I0224 15:06:20.561386   32699 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e \
	I0224 15:06:20.561395   32699 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e \
	I0224 15:06:20.561419   32699 kubeadm.go:322] 	--control-plane 
	I0224 15:06:20.561424   32699 command_runner.go:130] > 	--control-plane 
	I0224 15:06:20.561430   32699 kubeadm.go:322] 
	I0224 15:06:20.561492   32699 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0224 15:06:20.561496   32699 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0224 15:06:20.561498   32699 kubeadm.go:322] 
	I0224 15:06:20.561613   32699 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5c43ss.8mz735kcmuqfeuba \
	I0224 15:06:20.561620   32699 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 5c43ss.8mz735kcmuqfeuba \
	I0224 15:06:20.561693   32699 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e 
	I0224 15:06:20.561697   32699 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e 
	I0224 15:06:20.566443   32699 kubeadm.go:322] W0224 23:06:06.929709    1301 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 15:06:20.566460   32699 command_runner.go:130] ! W0224 23:06:06.929709    1301 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 15:06:20.566627   32699 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0224 15:06:20.566638   32699 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0224 15:06:20.566768   32699 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:06:20.566778   32699 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:06:20.566802   32699 cni.go:84] Creating CNI manager for ""
	I0224 15:06:20.566817   32699 cni.go:136] 1 nodes found, recommending kindnet
	I0224 15:06:20.606381   32699 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0224 15:06:20.628536   32699 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0224 15:06:20.656094   32699 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0224 15:06:20.656117   32699 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0224 15:06:20.656126   32699 command_runner.go:130] > Device: a6h/166d	Inode: 2757559     Links: 1
	I0224 15:06:20.656135   32699 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 15:06:20.656144   32699 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0224 15:06:20.656151   32699 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0224 15:06:20.656172   32699 command_runner.go:130] > Change: 2023-02-24 22:41:56.035825051 +0000
	I0224 15:06:20.656177   32699 command_runner.go:130] >  Birth: -
	I0224 15:06:20.656261   32699 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0224 15:06:20.656270   32699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0224 15:06:20.670756   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 15:06:21.280785   32699 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0224 15:06:21.285685   32699 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0224 15:06:21.291793   32699 command_runner.go:130] > serviceaccount/kindnet created
	I0224 15:06:21.299522   32699 command_runner.go:130] > daemonset.apps/kindnet created
	I0224 15:06:21.305563   32699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 15:06:21.305650   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:21.305648   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=08976559d74fb9c2654733dc21cb8f9d9ec24374 minikube.k8s.io/name=multinode-358000 minikube.k8s.io/updated_at=2023_02_24T15_06_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:21.313911   32699 command_runner.go:130] > -16
	I0224 15:06:21.313936   32699 ops.go:34] apiserver oom_adj: -16
	I0224 15:06:21.388520   32699 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0224 15:06:21.411045   32699 command_runner.go:130] > node/multinode-358000 labeled
	I0224 15:06:21.411099   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:21.505089   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:22.006407   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:22.066690   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:22.507343   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:22.569646   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:23.006632   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:23.071130   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:23.506590   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:23.569176   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:24.006544   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:24.071511   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:24.506502   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:24.570330   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:25.007555   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:25.073387   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:25.506555   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:25.573121   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:26.006564   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:26.072268   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:26.507511   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:26.571578   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:27.007397   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:27.071721   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:27.506676   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:27.571090   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:28.006648   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:28.070884   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:28.507586   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:28.568767   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:29.006654   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:29.067325   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:29.506706   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:29.574296   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:30.006789   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:30.072674   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:30.507377   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:30.570895   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:31.006779   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:31.073349   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:31.507721   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:31.573940   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:32.006759   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:32.073746   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:32.507329   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:32.573734   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:33.007394   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:33.069741   32699 command_runner.go:130] > NAME      SECRETS   AGE
	I0224 15:06:33.069753   32699 command_runner.go:130] > default   0         1s
	I0224 15:06:33.073484   32699 kubeadm.go:1073] duration metric: took 11.767553662s to wait for elevateKubeSystemPrivileges.
	I0224 15:06:33.073497   32699 kubeadm.go:403] StartCluster complete in 26.226728728s
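The block of repeated "kubectl get sa default" calls above is a plain poll: retry roughly every half second until the "default" service account exists, then record how long the wait took. A minimal sketch of such a loop (illustrative; not minikube's elevateKubeSystemPrivileges code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
    // timeout expires, matching the ~500ms cadence visible in the log above.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
            if cmd.Run() == nil {
                return nil // serviceaccount "default" exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("serviceaccount %q not found after %s", "default", timeout)
    }

    func main() {
        start := time.Now()
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("duration metric:", time.Since(start))
    }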
	I0224 15:06:33.073518   32699 settings.go:142] acquiring lock: {Name:mk61f6764f7c264302b01ffc8eee0ee0f10d20c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:33.073608   32699 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:06:33.074109   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/kubeconfig: {Name:mk1182fefc6ba3b7ea2d2356c47127703005a4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:33.074361   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 15:06:33.074395   32699 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0224 15:06:33.074455   32699 addons.go:65] Setting storage-provisioner=true in profile "multinode-358000"
	I0224 15:06:33.074463   32699 addons.go:65] Setting default-storageclass=true in profile "multinode-358000"
	I0224 15:06:33.074467   32699 addons.go:227] Setting addon storage-provisioner=true in "multinode-358000"
	I0224 15:06:33.074487   32699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-358000"
	I0224 15:06:33.074515   32699 config.go:182] Loaded profile config "multinode-358000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:06:33.074531   32699 host.go:66] Checking if "multinode-358000" exists ...
	I0224 15:06:33.074757   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:06:33.074809   32699 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:06:33.074857   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:06:33.075814   32699 kapi.go:59] client config for multinode-358000: &rest.Config{Host:"https://127.0.0.1:58093", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:06:33.078838   32699 cert_rotation.go:137] Starting client certificate rotation controller
	I0224 15:06:33.079199   32699 round_trippers.go:463] GET https://127.0.0.1:58093/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 15:06:33.079207   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:33.079216   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:33.079223   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:33.088272   32699 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0224 15:06:33.088287   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:33.088293   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:33.088298   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:33.088303   32699 round_trippers.go:580]     Content-Length: 291
	I0224 15:06:33.088307   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:33 GMT
	I0224 15:06:33.088311   32699 round_trippers.go:580]     Audit-Id: 59f5e50e-3e7b-44e1-910b-6dd59f461b2c
	I0224 15:06:33.088316   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:33.088320   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:33.088342   32699 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0964618a-58cb-4193-adc5-a51a070222ce","resourceVersion":"299","creationTimestamp":"2023-02-24T23:06:20Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0224 15:06:33.088656   32699 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0964618a-58cb-4193-adc5-a51a070222ce","resourceVersion":"299","creationTimestamp":"2023-02-24T23:06:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0224 15:06:33.088684   32699 round_trippers.go:463] PUT https://127.0.0.1:58093/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 15:06:33.088688   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:33.088694   32699 round_trippers.go:473]     Content-Type: application/json
	I0224 15:06:33.088700   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:33.088713   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:33.094819   32699 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0224 15:06:33.094834   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:33.094840   32699 round_trippers.go:580]     Audit-Id: 1e89de0f-c6c6-49ba-9ec3-e207c79c7611
	I0224 15:06:33.094845   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:33.094849   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:33.094855   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:33.094862   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:33.094867   32699 round_trippers.go:580]     Content-Length: 291
	I0224 15:06:33.094872   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:33 GMT
	I0224 15:06:33.094890   32699 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0964618a-58cb-4193-adc5-a51a070222ce","resourceVersion":"311","creationTimestamp":"2023-02-24T23:06:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0224 15:06:33.141535   32699 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:06:33.163361   32699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 15:06:33.163644   32699 kapi.go:59] client config for multinode-358000: &rest.Config{Host:"https://127.0.0.1:58093", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:06:33.200673   32699 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 15:06:33.200693   32699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 15:06:33.200810   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:33.201017   32699 round_trippers.go:463] GET https://127.0.0.1:58093/apis/storage.k8s.io/v1/storageclasses
	I0224 15:06:33.201035   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:33.201048   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:33.201060   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:33.206583   32699 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0224 15:06:33.206607   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:33.206613   32699 round_trippers.go:580]     Audit-Id: fcdbcbdd-f0e1-4bd7-9082-3e9ae0db8220
	I0224 15:06:33.206624   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:33.206630   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:33.206636   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:33.206640   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:33.206645   32699 round_trippers.go:580]     Content-Length: 109
	I0224 15:06:33.206649   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:33 GMT
	I0224 15:06:33.206677   32699 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"316"},"items":[]}
	I0224 15:06:33.207035   32699 addons.go:227] Setting addon default-storageclass=true in "multinode-358000"
	I0224 15:06:33.207057   32699 host.go:66] Checking if "multinode-358000" exists ...
	I0224 15:06:33.207442   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:06:33.210737   32699 command_runner.go:130] > apiVersion: v1
	I0224 15:06:33.210777   32699 command_runner.go:130] > data:
	I0224 15:06:33.210785   32699 command_runner.go:130] >   Corefile: |
	I0224 15:06:33.210791   32699 command_runner.go:130] >     .:53 {
	I0224 15:06:33.210797   32699 command_runner.go:130] >         errors
	I0224 15:06:33.210804   32699 command_runner.go:130] >         health {
	I0224 15:06:33.210816   32699 command_runner.go:130] >            lameduck 5s
	I0224 15:06:33.210823   32699 command_runner.go:130] >         }
	I0224 15:06:33.210831   32699 command_runner.go:130] >         ready
	I0224 15:06:33.210848   32699 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0224 15:06:33.210856   32699 command_runner.go:130] >            pods insecure
	I0224 15:06:33.210868   32699 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0224 15:06:33.210881   32699 command_runner.go:130] >            ttl 30
	I0224 15:06:33.210888   32699 command_runner.go:130] >         }
	I0224 15:06:33.210896   32699 command_runner.go:130] >         prometheus :9153
	I0224 15:06:33.210903   32699 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0224 15:06:33.210915   32699 command_runner.go:130] >            max_concurrent 1000
	I0224 15:06:33.210922   32699 command_runner.go:130] >         }
	I0224 15:06:33.210931   32699 command_runner.go:130] >         cache 30
	I0224 15:06:33.210937   32699 command_runner.go:130] >         loop
	I0224 15:06:33.210944   32699 command_runner.go:130] >         reload
	I0224 15:06:33.210950   32699 command_runner.go:130] >         loadbalance
	I0224 15:06:33.210956   32699 command_runner.go:130] >     }
	I0224 15:06:33.210962   32699 command_runner.go:130] > kind: ConfigMap
	I0224 15:06:33.210969   32699 command_runner.go:130] > metadata:
	I0224 15:06:33.210985   32699 command_runner.go:130] >   creationTimestamp: "2023-02-24T23:06:20Z"
	I0224 15:06:33.210992   32699 command_runner.go:130] >   name: coredns
	I0224 15:06:33.210999   32699 command_runner.go:130] >   namespace: kube-system
	I0224 15:06:33.211004   32699 command_runner.go:130] >   resourceVersion: "227"
	I0224 15:06:33.211014   32699 command_runner.go:130] >   uid: 82f87881-3653-4247-95f2-0ea74ee5b71c
	I0224 15:06:33.211264   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0224 15:06:33.276686   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:33.280279   32699 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 15:06:33.280298   32699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 15:06:33.280390   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:33.342167   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:33.513398   32699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 15:06:33.567681   32699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 15:06:33.597274   32699 round_trippers.go:463] GET https://127.0.0.1:58093/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 15:06:33.597289   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:33.597296   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:33.597302   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:33.654729   32699 round_trippers.go:574] Response Status: 200 OK in 57 milliseconds
	I0224 15:06:33.654755   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:33.654767   32699 round_trippers.go:580]     Audit-Id: 5d677796-dfd3-4af6-95e9-28f734425502
	I0224 15:06:33.654781   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:33.654792   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:33.654813   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:33.654833   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:33.654853   32699 round_trippers.go:580]     Content-Length: 291
	I0224 15:06:33.654869   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:33 GMT
	I0224 15:06:33.654925   32699 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0964618a-58cb-4193-adc5-a51a070222ce","resourceVersion":"352","creationTimestamp":"2023-02-24T23:06:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0224 15:06:33.655027   32699 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-358000" context rescaled to 1 replicas
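
The GET/PUT exchange above reads the coredns Deployment's autoscaling/v1 Scale subresource and writes it back with spec.replicas set to 1, which is how the deployment gets rescaled for a single-control-plane start. A hedged client-go sketch of the same operation (illustrative; the kubeconfig path is only an example, and from a shell "kubectl -n kube-system scale deployment coredns --replicas=1" has the same effect):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from a kubeconfig (path here is only an example).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deployments := clientset.AppsV1().Deployments("kube-system")

        // Read the current Scale subresource, as in the GET above.
        scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // Write it back with a single replica, as in the PUT above.
        scale.Spec.Replicas = 1
        if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }
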
	I0224 15:06:33.655064   32699 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 15:06:33.677515   32699 out.go:177] * Verifying Kubernetes components...
	I0224 15:06:33.697324   32699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:06:33.764331   32699 command_runner.go:130] > configmap/coredns replaced
	I0224 15:06:33.764366   32699 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
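
The "kubectl ... replace -f -" pipeline run a few lines earlier rewrites the coredns ConfigMap in place: sed inserts a log directive ahead of errors and a hosts block ahead of the forward . /etc/resolv.conf stanza, so that host.minikube.internal resolves to the host address (192.168.65.2 here). A sketch of how the .:53 server block reads after that edit, reconstructed from the sed expressions and the Corefile dumped above (illustrative, not a dump of the live ConfigMap):

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
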
	I0224 15:06:33.972253   32699 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0224 15:06:33.972277   32699 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0224 15:06:33.972293   32699 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0224 15:06:33.972305   32699 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0224 15:06:33.972312   32699 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0224 15:06:33.972318   32699 command_runner.go:130] > pod/storage-provisioner created
	I0224 15:06:33.980295   32699 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0224 15:06:33.980435   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:34.002751   32699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0224 15:06:34.038875   32699 addons.go:492] enable addons completed in 964.411003ms: enabled=[storage-provisioner default-storageclass]
	I0224 15:06:34.063518   32699 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:06:34.063795   32699 kapi.go:59] client config for multinode-358000: &rest.Config{Host:"https://127.0.0.1:58093", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:06:34.064068   32699 node_ready.go:35] waiting up to 6m0s for node "multinode-358000" to be "Ready" ...
	I0224 15:06:34.064130   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:34.064139   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:34.064148   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:34.064154   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:34.067142   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:34.067157   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:34.067165   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:34.067170   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:34.067175   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:34.067183   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:34 GMT
	I0224 15:06:34.067188   32699 round_trippers.go:580]     Audit-Id: f6e60995-4f9c-4d62-8d7a-661da686d1f9
	I0224 15:06:34.067208   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:34.067288   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:34.067718   32699 node_ready.go:49] node "multinode-358000" has status "Ready":"True"
	I0224 15:06:34.067728   32699 node_ready.go:38] duration metric: took 3.645908ms waiting for node "multinode-358000" to be "Ready" ...
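
Once the addons are enabled, the node_ready step fetches the node object and inspects its conditions; the Ready condition is already True here, so the wait finishes in a few milliseconds. A minimal client-go sketch of that check (illustrative; the node name is taken from the log, the kubeconfig path is an example, and error handling is simplified):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady fetches the node and reports whether its Ready condition is True.
    func nodeIsReady(clientset *kubernetes.Clientset, name string) (bool, error) {
        node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeIsReady(clientset, "multinode-358000")
        if err != nil {
            panic(err)
        }
        fmt.Println("node Ready:", ready)
    }
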
	I0224 15:06:34.067737   32699 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 15:06:34.067785   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods
	I0224 15:06:34.067790   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:34.067799   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:34.067810   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:34.071884   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:34.071910   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:34.071919   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:34 GMT
	I0224 15:06:34.071925   32699 round_trippers.go:580]     Audit-Id: ffc6bee6-03f1-40ef-bf7c-4f88c340bc75
	I0224 15:06:34.071929   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:34.071934   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:34.071939   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:34.071944   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:34.073208   32699 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"364"},"items":[{"metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 60468 chars]
	I0224 15:06:34.076341   32699 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-qfqth" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:34.076397   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:34.076403   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:34.076410   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:34.076431   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:34.079510   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:34.079522   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:34.079528   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:34 GMT
	I0224 15:06:34.079533   32699 round_trippers.go:580]     Audit-Id: e39fe1fa-f969-4afa-a589-74a87a5ece31
	I0224 15:06:34.079537   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:34.079542   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:34.079548   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:34.079553   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:34.079614   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0224 15:06:34.079907   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:34.079914   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:34.079920   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:34.079951   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:34.082417   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:34.082432   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:34.082438   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:34.082443   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:34.082453   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:34.082462   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:34 GMT
	I0224 15:06:34.082467   32699 round_trippers.go:580]     Audit-Id: ae2f2535-d29f-4fd8-a253-4ffb7bb3f078
	I0224 15:06:34.082472   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:34.082600   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:34.583877   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:34.583891   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:34.583897   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:34.583902   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:34.588173   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:34.588187   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:34.588193   32699 round_trippers.go:580]     Audit-Id: dd93584f-cf02-415f-be96-7eb3d1617e1d
	I0224 15:06:34.588201   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:34.588206   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:34.588211   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:34.588217   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:34.588222   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:34 GMT
	I0224 15:06:34.588277   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0224 15:06:34.588597   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:34.588604   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:34.588610   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:34.588615   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:34.591422   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:34.591438   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:34.591446   32699 round_trippers.go:580]     Audit-Id: a4c2783d-b424-4fbc-9e4b-424ee77985e3
	I0224 15:06:34.591453   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:34.591460   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:34.591467   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:34.591473   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:34.591482   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:34 GMT
	I0224 15:06:34.591599   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:35.084237   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:35.084257   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:35.084266   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:35.084272   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:35.086632   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:35.086647   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:35.086658   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:35 GMT
	I0224 15:06:35.086671   32699 round_trippers.go:580]     Audit-Id: c33e1d21-7927-4507-8ee6-c9a7fe5f1f18
	I0224 15:06:35.086678   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:35.086688   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:35.086694   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:35.086699   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:35.086761   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0224 15:06:35.087047   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:35.087054   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:35.087060   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:35.087066   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:35.107261   32699 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0224 15:06:35.107284   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:35.107290   32699 round_trippers.go:580]     Audit-Id: 5a4f5900-bafb-4535-a25d-1ff9a3d1ff53
	I0224 15:06:35.107295   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:35.107300   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:35.107304   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:35.107309   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:35.107317   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:35 GMT
	I0224 15:06:35.107407   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:35.583099   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:35.583118   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:35.583130   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:35.583140   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:35.586729   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:35.586739   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:35.586747   32699 round_trippers.go:580]     Audit-Id: f908a10e-08e1-4175-9df7-836d89c9e028
	I0224 15:06:35.586754   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:35.586759   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:35.586764   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:35.586769   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:35.586774   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:35 GMT
	I0224 15:06:35.586829   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0224 15:06:35.587118   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:35.587125   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:35.587131   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:35.587136   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:35.589668   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:35.589678   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:35.589684   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:35 GMT
	I0224 15:06:35.589689   32699 round_trippers.go:580]     Audit-Id: d1c142fe-bffa-4f98-9a10-535d936f9352
	I0224 15:06:35.589694   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:35.589699   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:35.589707   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:35.589713   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:35.589765   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:36.083190   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:36.083215   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:36.083227   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:36.083237   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:36.087044   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:36.087056   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:36.087062   32699 round_trippers.go:580]     Audit-Id: 995980ef-ebcc-4089-aa03-821777601be5
	I0224 15:06:36.087066   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:36.087071   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:36.087076   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:36.087081   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:36.087087   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:36 GMT
	I0224 15:06:36.087350   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0224 15:06:36.087621   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:36.087627   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:36.087633   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:36.087639   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:36.089942   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:36.089952   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:36.089957   32699 round_trippers.go:580]     Audit-Id: 36532142-5fe2-40a7-95af-2ee5ca2e58da
	I0224 15:06:36.089962   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:36.089971   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:36.089978   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:36.089987   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:36.089993   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:36 GMT
	I0224 15:06:36.090051   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:36.090230   32699 pod_ready.go:102] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"
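
The repeating GETs above are the pod_ready phase polling the coredns pod (and re-reading the node) about twice per second until the pod reports Ready; at this point its Ready condition is still False. A hedged sketch of an equivalent polling loop with client-go (illustrative; the pod name comes from the log, while the interval, timeout, and kubeconfig path are placeholders):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(6 * time.Minute) // the log waits up to 6m0s
        for {
            pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-787d4945fb-qfqth", metav1.GetOptions{})
            if err == nil && podIsReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            if time.Now().After(deadline) {
                fmt.Println("timed out waiting for pod to become Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence of the requests above
        }
    }
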
	I0224 15:06:36.583237   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:36.583257   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:36.583269   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:36.583279   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:36.587672   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:36.587682   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:36.587687   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:36.587692   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:36.587697   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:36.587702   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:36 GMT
	I0224 15:06:36.587707   32699 round_trippers.go:580]     Audit-Id: 7fe00f9e-914d-4514-8f45-48bdc4d07b92
	I0224 15:06:36.587712   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:36.587766   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0224 15:06:36.588034   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:36.588043   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:36.588048   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:36.588054   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:36.590204   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:36.590214   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:36.590220   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:36.590226   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:36.590232   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:36.590236   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:36 GMT
	I0224 15:06:36.590242   32699 round_trippers.go:580]     Audit-Id: 78bbddcc-de2f-4ace-9f55-99476266d294
	I0224 15:06:36.590246   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:36.590304   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:37.083042   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:37.083057   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:37.083065   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:37.083070   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:37.086096   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:37.086119   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:37.086130   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:37 GMT
	I0224 15:06:37.086139   32699 round_trippers.go:580]     Audit-Id: e63c6fe5-0042-4315-af87-4c2ad096f9d7
	I0224 15:06:37.086147   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:37.086154   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:37.086162   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:37.086169   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:37.087574   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:37.087974   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:37.087984   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:37.088003   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:37.088018   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:37.092371   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:37.092389   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:37.092403   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:37.092416   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:37.092429   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:37.092438   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:37.092447   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:37 GMT
	I0224 15:06:37.092455   32699 round_trippers.go:580]     Audit-Id: 7e095165-f88d-4681-a7de-bc9251f69917
	I0224 15:06:37.093143   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:37.583394   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:37.583407   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:37.583417   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:37.583423   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:37.585916   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:37.585930   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:37.585937   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:37.585942   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:37.585951   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:37.585957   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:37.585962   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:37 GMT
	I0224 15:06:37.585968   32699 round_trippers.go:580]     Audit-Id: 47028fd6-67b4-4d83-ac7f-e713c02c8f2a
	I0224 15:06:37.586034   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:37.586331   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:37.586338   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:37.586344   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:37.586349   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:37.588696   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:37.588708   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:37.588714   32699 round_trippers.go:580]     Audit-Id: 3b897d02-5e1d-4448-9565-e9a30c8f2965
	I0224 15:06:37.588718   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:37.588723   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:37.588727   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:37.588732   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:37.588737   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:37 GMT
	I0224 15:06:37.588808   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:38.083132   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:38.083147   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:38.083154   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:38.083159   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:38.085732   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:38.085749   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:38.085755   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:38.085760   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:38.085765   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:38.085770   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:38 GMT
	I0224 15:06:38.085775   32699 round_trippers.go:580]     Audit-Id: c0eaeffe-dec9-40b5-a8cb-eebeb3ccdb7f
	I0224 15:06:38.085781   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:38.086035   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:38.086327   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:38.086334   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:38.086341   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:38.086346   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:38.088401   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:38.088412   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:38.088423   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:38.088434   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:38.088440   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:38.088445   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:38 GMT
	I0224 15:06:38.088450   32699 round_trippers.go:580]     Audit-Id: a786a4f3-3e4c-4bca-a8b7-5b901c560b67
	I0224 15:06:38.088456   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:38.088537   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:38.583342   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:38.583361   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:38.583374   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:38.583384   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:38.587349   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:38.587363   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:38.587373   32699 round_trippers.go:580]     Audit-Id: bd3d2be4-828c-4d38-be39-239d29fd23be
	I0224 15:06:38.587380   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:38.587387   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:38.587394   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:38.587400   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:38.587407   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:38 GMT
	I0224 15:06:38.587534   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:38.587849   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:38.587857   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:38.587863   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:38.587884   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:38.590026   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:38.590035   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:38.590040   32699 round_trippers.go:580]     Audit-Id: 848a1ef4-3deb-4b3c-a972-d4968d44a0e1
	I0224 15:06:38.590045   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:38.590050   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:38.590058   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:38.590064   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:38.590069   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:38 GMT
	I0224 15:06:38.590208   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:38.590397   32699 pod_ready.go:102] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"
	I0224 15:06:39.084507   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:39.084531   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:39.084595   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:39.084608   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:39.088901   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:39.088913   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:39.088920   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:39.088928   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:39 GMT
	I0224 15:06:39.088933   32699 round_trippers.go:580]     Audit-Id: 3503d2b4-9b4e-4c2e-ad85-ba95d6a785fa
	I0224 15:06:39.088938   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:39.088943   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:39.088948   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:39.089009   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:39.089290   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:39.089296   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:39.089302   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:39.089308   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:39.091262   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:06:39.091273   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:39.091278   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:39.091283   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:39.091288   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:39.091293   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:39.091300   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:39 GMT
	I0224 15:06:39.091304   32699 round_trippers.go:580]     Audit-Id: 03c9d938-e319-48aa-922c-6821f72a3e73
	I0224 15:06:39.091364   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:39.583511   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:39.583532   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:39.583545   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:39.583555   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:39.587519   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:39.587535   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:39.587543   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:39 GMT
	I0224 15:06:39.587549   32699 round_trippers.go:580]     Audit-Id: 012bb995-9d6d-41aa-8061-6e5b78ae9aa4
	I0224 15:06:39.587555   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:39.587562   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:39.587569   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:39.587576   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:39.587786   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:39.588131   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:39.588137   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:39.588143   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:39.588149   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:39.590493   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:39.590501   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:39.590507   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:39.590512   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:39.590517   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:39.590522   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:39.590527   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:39 GMT
	I0224 15:06:39.590531   32699 round_trippers.go:580]     Audit-Id: df736b49-434c-4d6e-8fac-ae8d8d4f96eb
	I0224 15:06:39.590584   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:40.083246   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:40.083259   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:40.083265   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:40.083270   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:40.086187   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:40.086204   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:40.086212   32699 round_trippers.go:580]     Audit-Id: 1b486a99-1510-4cbc-9da3-d9d93f190720
	I0224 15:06:40.086217   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:40.086222   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:40.086227   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:40.086232   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:40.086237   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:40 GMT
	I0224 15:06:40.086305   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:40.086617   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:40.086624   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:40.086630   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:40.086639   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:40.088643   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:06:40.088653   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:40.088659   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:40.088664   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:40.088672   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:40.088678   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:40 GMT
	I0224 15:06:40.088683   32699 round_trippers.go:580]     Audit-Id: fceee19d-6f69-43c3-a51f-359506400691
	I0224 15:06:40.088687   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:40.088971   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:40.583198   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:40.583211   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:40.583217   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:40.583222   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:40.585954   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:40.585968   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:40.585976   32699 round_trippers.go:580]     Audit-Id: 71c8a523-6605-4a45-a659-e1cdbeaf8b25
	I0224 15:06:40.585987   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:40.585998   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:40.586005   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:40.586017   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:40.586028   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:40 GMT
	I0224 15:06:40.586209   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:40.586579   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:40.586587   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:40.586593   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:40.586600   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:40.588824   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:40.588834   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:40.588841   32699 round_trippers.go:580]     Audit-Id: 531b4b6c-c7e8-470e-95a2-45f2bf1401b1
	I0224 15:06:40.588853   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:40.588859   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:40.588863   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:40.588869   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:40.588875   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:40 GMT
	I0224 15:06:40.589309   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:41.083194   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:41.083211   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:41.083217   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:41.083223   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:41.086091   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:41.086103   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:41.086113   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:41 GMT
	I0224 15:06:41.086122   32699 round_trippers.go:580]     Audit-Id: dbfc62ef-9936-4639-bf0d-cdb0e0062d8d
	I0224 15:06:41.086127   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:41.086132   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:41.086137   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:41.086142   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:41.086680   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:41.087097   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:41.087105   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:41.087112   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:41.087117   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:41.089810   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:41.089819   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:41.089824   32699 round_trippers.go:580]     Audit-Id: cff029cc-901c-4a45-b92f-c3fe303240d2
	I0224 15:06:41.089829   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:41.089834   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:41.089841   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:41.089847   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:41.089851   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:41 GMT
	I0224 15:06:41.089917   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:41.090106   32699 pod_ready.go:102] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"
	I0224 15:06:41.583314   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:41.583333   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:41.583342   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:41.583351   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:41.586067   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:41.586086   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:41.586094   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:41.586100   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:41.586106   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:41 GMT
	I0224 15:06:41.586112   32699 round_trippers.go:580]     Audit-Id: b8ebdda6-13d5-49f7-97c7-e0144172429a
	I0224 15:06:41.586117   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:41.586122   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:41.586194   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:41.586531   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:41.586540   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:41.586547   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:41.586554   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:41.588652   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:41.588663   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:41.588668   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:41.588674   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:41.588680   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:41 GMT
	I0224 15:06:41.588687   32699 round_trippers.go:580]     Audit-Id: d2686503-c521-4a77-a9a7-5d65135f3900
	I0224 15:06:41.588692   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:41.588697   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:41.588776   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:42.083218   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:42.083233   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:42.083240   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:42.083245   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:42.088626   32699 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0224 15:06:42.088640   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:42.088646   32699 round_trippers.go:580]     Audit-Id: fb4dd226-a63b-4db7-818b-4ed7876118a1
	I0224 15:06:42.088654   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:42.088666   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:42.088671   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:42.088676   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:42.088682   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:42 GMT
	I0224 15:06:42.088759   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:42.089048   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:42.089054   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:42.089060   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:42.089065   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:42.091616   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:42.091634   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:42.091640   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:42 GMT
	I0224 15:06:42.091647   32699 round_trippers.go:580]     Audit-Id: 867da644-0d6b-48c1-8028-0a7f9249187f
	I0224 15:06:42.091654   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:42.091662   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:42.091670   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:42.091678   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:42.091987   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:42.583253   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:42.583266   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:42.583273   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:42.583278   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:42.586030   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:42.586041   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:42.586048   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:42.586055   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:42.586062   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:42.586069   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:42.586078   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:42 GMT
	I0224 15:06:42.586083   32699 round_trippers.go:580]     Audit-Id: 0f84a135-b43f-4f06-a3cc-a92e964c0f45
	I0224 15:06:42.586151   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:42.586442   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:42.586448   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:42.586454   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:42.586463   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:42.588375   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:06:42.588388   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:42.588395   32699 round_trippers.go:580]     Audit-Id: 302ddd83-a06d-43ad-b13e-1d536b3f3ac9
	I0224 15:06:42.588402   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:42.588413   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:42.588419   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:42.588425   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:42.588433   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:42 GMT
	I0224 15:06:42.588701   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:43.083596   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:43.083609   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:43.083616   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:43.083622   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:43.086333   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:43.086346   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:43.086354   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:43 GMT
	I0224 15:06:43.086361   32699 round_trippers.go:580]     Audit-Id: 6717d740-6401-445c-b19e-784d9e2fa204
	I0224 15:06:43.086368   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:43.086381   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:43.086425   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:43.086437   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:43.086567   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:43.086849   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:43.086855   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:43.086861   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:43.086867   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:43.089113   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:43.089122   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:43.089128   32699 round_trippers.go:580]     Audit-Id: a75ba1ec-fb2f-4629-8cdf-df16ad47ffbf
	I0224 15:06:43.089133   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:43.089138   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:43.089142   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:43.089148   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:43.089152   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:43 GMT
	I0224 15:06:43.089209   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:43.583432   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:43.583445   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:43.583452   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:43.583457   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:43.586015   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:43.586029   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:43.586035   32699 round_trippers.go:580]     Audit-Id: 14f80c86-f977-4a48-8fe9-de4353e53d5f
	I0224 15:06:43.586041   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:43.586047   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:43.586051   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:43.586056   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:43.586062   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:43 GMT
	I0224 15:06:43.586137   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:43.586471   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:43.586483   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:43.586507   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:43.586516   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:43.588983   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:43.588994   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:43.588999   32699 round_trippers.go:580]     Audit-Id: 451121de-8332-40bf-81d7-11f0982e5ee4
	I0224 15:06:43.589007   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:43.589012   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:43.589018   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:43.589023   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:43.589071   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:43 GMT
	I0224 15:06:43.589434   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:43.589650   32699 pod_ready.go:102] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"
	I0224 15:06:44.083243   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:44.083259   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:44.083266   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:44.083272   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:44.086390   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:44.086405   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:44.086421   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:44.086434   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:44 GMT
	I0224 15:06:44.086440   32699 round_trippers.go:580]     Audit-Id: cf12a4e3-9bbc-4884-896d-3255641a3fb3
	I0224 15:06:44.086445   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:44.086450   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:44.086455   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:44.086521   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:44.086804   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:44.086811   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:44.086818   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:44.086824   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:44.089386   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:44.089401   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:44.089412   32699 round_trippers.go:580]     Audit-Id: e20c3ab3-1181-4fe7-a101-34b6d78a33e9
	I0224 15:06:44.089420   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:44.089427   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:44.089433   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:44.089437   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:44.089447   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:44 GMT
	I0224 15:06:44.089516   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:44.583428   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:44.583441   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:44.583448   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:44.583453   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:44.586688   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:44.586700   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:44.586707   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:44 GMT
	I0224 15:06:44.586712   32699 round_trippers.go:580]     Audit-Id: 342f699a-2d42-431e-9db3-f160a9cf3906
	I0224 15:06:44.586716   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:44.586721   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:44.586726   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:44.586731   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:44.586821   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:44.587144   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:44.587151   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:44.587157   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:44.587165   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:44.589840   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:44.589854   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:44.589861   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:44.589867   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:44.589873   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:44 GMT
	I0224 15:06:44.589882   32699 round_trippers.go:580]     Audit-Id: a5244e89-b702-409f-8a50-bb06ce14c86f
	I0224 15:06:44.589890   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:44.589895   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:44.589998   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:45.083486   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:45.083514   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:45.083527   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:45.083537   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:45.088220   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:45.088233   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:45.088239   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:45 GMT
	I0224 15:06:45.088247   32699 round_trippers.go:580]     Audit-Id: bdd0aa8d-81c7-47a0-89ef-a153b5cf6040
	I0224 15:06:45.088252   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:45.088256   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:45.088261   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:45.088267   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:45.088331   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:45.088620   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:45.088627   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:45.088633   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:45.088638   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:45.093148   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:45.093158   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:45.093164   32699 round_trippers.go:580]     Audit-Id: 65c1eb15-e3ee-4482-bcc8-edc840924893
	I0224 15:06:45.093168   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:45.093173   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:45.093177   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:45.093184   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:45.093189   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:45 GMT
	I0224 15:06:45.093561   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:45.583458   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:45.583473   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:45.583480   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:45.583486   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:45.586257   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:45.586272   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:45.586281   32699 round_trippers.go:580]     Audit-Id: 8cad5767-41cc-4996-98d5-2a50ce2f782b
	I0224 15:06:45.586288   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:45.586295   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:45.586302   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:45.586311   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:45.586324   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:45 GMT
	I0224 15:06:45.586435   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:45.586757   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:45.586766   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:45.586777   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:45.586790   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:45.588737   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:06:45.588748   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:45.588757   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:45.588764   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:45.588771   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:45 GMT
	I0224 15:06:45.588779   32699 round_trippers.go:580]     Audit-Id: 8181b63b-f365-4dc0-bd1a-86402dd6ca1a
	I0224 15:06:45.588786   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:45.588793   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:45.588885   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:46.084576   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:46.084597   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:46.084607   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:46.084615   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:46.087118   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:46.087132   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:46.087141   32699 round_trippers.go:580]     Audit-Id: 9f851fee-fea8-48b3-9fc9-d7ee9557c3a7
	I0224 15:06:46.087150   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:46.087161   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:46.087166   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:46.087171   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:46.087176   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:46 GMT
	I0224 15:06:46.087415   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:46.087702   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:46.087709   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:46.087716   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:46.087723   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:46.089997   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:46.090008   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:46.090017   32699 round_trippers.go:580]     Audit-Id: fce364b8-d2a3-4754-bb49-50ef8609511b
	I0224 15:06:46.090023   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:46.090034   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:46.090041   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:46.090047   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:46.090051   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:46 GMT
	I0224 15:06:46.090166   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:46.090390   32699 pod_ready.go:102] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"
	I0224 15:06:46.583414   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:46.583429   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:46.583435   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:46.583441   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:46.586500   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:46.586514   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:46.586520   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:46.586526   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:46 GMT
	I0224 15:06:46.586534   32699 round_trippers.go:580]     Audit-Id: 857253ed-efc2-4dfa-ac67-d17f3872ce5b
	I0224 15:06:46.586540   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:46.586545   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:46.586552   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:46.586619   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:46.586922   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:46.586929   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:46.586935   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:46.586940   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:46.589529   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:46.589544   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:46.589556   32699 round_trippers.go:580]     Audit-Id: ccbe3dc1-9897-4756-828f-980280e97779
	I0224 15:06:46.589567   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:46.589584   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:46.589595   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:46.589614   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:46.589623   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:46 GMT
	I0224 15:06:46.589705   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:47.083407   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:47.083423   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:47.083434   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:47.083446   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:47.086263   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:47.086277   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:47.086285   32699 round_trippers.go:580]     Audit-Id: 3ad6c614-f9a7-4c7c-a180-ce9dd02e9ee8
	I0224 15:06:47.086293   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:47.086299   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:47.086305   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:47.086310   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:47.086315   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:47 GMT
	I0224 15:06:47.086375   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:47.086668   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:47.086676   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:47.086684   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:47.086692   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:47.088869   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:47.088890   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:47.088903   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:47 GMT
	I0224 15:06:47.088917   32699 round_trippers.go:580]     Audit-Id: c106395a-f7ec-4a32-b3a7-c37d81699edc
	I0224 15:06:47.088930   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:47.088939   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:47.088946   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:47.088955   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:47.089364   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:47.583369   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:47.583387   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:47.583420   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:47.583429   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:47.586003   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:47.586018   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:47.586027   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:47.586037   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:47.586050   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:47.586059   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:47 GMT
	I0224 15:06:47.586076   32699 round_trippers.go:580]     Audit-Id: bc728c5f-e7f8-471a-96bf-dc85feaafacc
	I0224 15:06:47.586085   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:47.586230   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:47.586511   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:47.586517   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:47.586524   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:47.586529   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:47.588859   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:47.588871   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:47.588878   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:47.588883   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:47 GMT
	I0224 15:06:47.588889   32699 round_trippers.go:580]     Audit-Id: c35753ed-ba23-424d-82ca-761877cf2eaf
	I0224 15:06:47.588893   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:47.588899   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:47.588904   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:47.589020   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:48.083414   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:48.083430   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:48.083468   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:48.083479   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:48.086645   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:48.086657   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:48.086663   32699 round_trippers.go:580]     Audit-Id: 69f5dd73-b4f6-4dc7-9954-3182aa53c2ad
	I0224 15:06:48.086668   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:48.086675   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:48.086682   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:48.086686   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:48.086691   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:48 GMT
	I0224 15:06:48.086919   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:48.087218   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:48.087226   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:48.087232   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:48.087237   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:48.089722   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:48.089734   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:48.089739   32699 round_trippers.go:580]     Audit-Id: a542bc98-af07-4fe3-9809-b08232980f34
	I0224 15:06:48.089744   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:48.089749   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:48.089754   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:48.089759   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:48.089764   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:48 GMT
	I0224 15:06:48.089836   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:48.583405   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:48.583423   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:48.583430   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:48.583464   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:48.586378   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:48.586391   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:48.586400   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:48 GMT
	I0224 15:06:48.586405   32699 round_trippers.go:580]     Audit-Id: 9306eae6-b3be-4e16-9324-cb841e563fd7
	I0224 15:06:48.586410   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:48.586415   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:48.586421   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:48.586426   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:48.586501   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:48.586812   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:48.586821   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:48.586827   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:48.586832   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:48.589322   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:48.589338   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:48.589345   32699 round_trippers.go:580]     Audit-Id: 364c83b4-06e1-4f4b-9c23-cb93113ff450
	I0224 15:06:48.589350   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:48.589360   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:48.589370   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:48.589377   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:48.589384   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:48 GMT
	I0224 15:06:48.589926   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:48.590224   32699 pod_ready.go:102] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"
	I0224 15:06:49.083424   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:49.083441   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.083447   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.083453   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.086189   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.086200   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.086206   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.086212   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.086217   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.086224   32699 round_trippers.go:580]     Audit-Id: 62e6b8f7-6848-4c98-89f5-c4dd996da150
	I0224 15:06:49.086232   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.086237   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.086506   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"419","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0224 15:06:49.086800   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.086806   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.086812   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.086818   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.088919   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.088929   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.088935   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.088942   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.088953   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.088965   32699 round_trippers.go:580]     Audit-Id: 350eb7d4-fcdc-4dcb-9cdf-dc49beeb7c0d
	I0224 15:06:49.088976   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.088985   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.089201   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:49.089391   32699 pod_ready.go:92] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"True"
	I0224 15:06:49.089404   32699 pod_ready.go:81] duration metric: took 15.012594069s waiting for pod "coredns-787d4945fb-qfqth" in "kube-system" namespace to be "Ready" ...
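	[editor's note] The preceding ~15 s of round-trip logging is a single readiness poll loop: minikube's pod_ready helper repeatedly GETs the CoreDNS pod (and its node) from the apiserver at 127.0.0.1:58093 about every 500 ms until the pod's Ready condition flips to True at 15:06:49. The Go sketch below is only an illustration of that polling pattern using client-go; the function name waitPodReady, the 500 ms interval, and the timeout parameter are assumptions for the example and are not minikube's actual pod_ready.go implementation.

	package example

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady is an illustrative sketch (not minikube's code) of the loop
	// visible in the log above: GET the pod until its Ready condition is True,
	// or give up after the timeout.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready, as logged at 15:06:49 above
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
			}
			time.Sleep(500 * time.Millisecond) // roughly matches the cadence seen in the log
		}
	}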
	I0224 15:06:49.089418   32699 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-tkkfd" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.089456   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-tkkfd
	I0224 15:06:49.089461   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.089467   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.089472   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.091828   32699 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0224 15:06:49.091838   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.091844   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.091849   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.091854   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.091859   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.091864   32699 round_trippers.go:580]     Content-Length: 216
	I0224 15:06:49.091870   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.091875   32699 round_trippers.go:580]     Audit-Id: bbe3c541-f7e2-46f5-8cdc-5f2937304e1c
	I0224 15:06:49.091889   32699 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-tkkfd\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-tkkfd","kind":"pods"},"code":404}
	I0224 15:06:49.092009   32699 pod_ready.go:97] error getting pod "coredns-787d4945fb-tkkfd" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-tkkfd" not found
	I0224 15:06:49.092016   32699 pod_ready.go:81] duration metric: took 2.59101ms waiting for pod "coredns-787d4945fb-tkkfd" in "kube-system" namespace to be "Ready" ...
	E0224 15:06:49.092022   32699 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-tkkfd" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-tkkfd" not found
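
The 404 on coredns-787d4945fb-tkkfd is logged with "(skipping!)" rather than failing the wait: the second CoreDNS replica has apparently been removed (presumably because minikube scales CoreDNS down to one replica), so the waiter simply drops it. A hedged sketch of that branch using client-go's error helpers; the function name and signature are hypothetical.

package podwait

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitOrSkip returns skip=true when the pod no longer exists, which is how
// the log above treats the deleted CoreDNS replica instead of erroring out.
func waitOrSkip(ctx context.Context, cs kubernetes.Interface, ns, name string) (skip bool, err error) {
	_, err = cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // pod was deleted; skip it rather than fail the wait
	}
	return false, err
}
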
	I0224 15:06:49.092026   32699 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.092058   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/etcd-multinode-358000
	I0224 15:06:49.092062   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.092068   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.092074   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.094081   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:06:49.094091   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.094097   32699 round_trippers.go:580]     Audit-Id: fb3e70a9-4c35-489e-abbc-f5f45ee3eeb1
	I0224 15:06:49.094102   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.094107   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.094112   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.094117   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.094122   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.094168   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-358000","namespace":"kube-system","uid":"cae08591-19d2-4e50-ba6b-73cf4552218c","resourceVersion":"282","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"f0e985c72569733baf436fab6e966c50","kubernetes.io/config.mirror":"f0e985c72569733baf436fab6e966c50","kubernetes.io/config.seen":"2023-02-24T23:06:20.399469529Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0224 15:06:49.094397   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.094403   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.094409   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.094414   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.096860   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.096871   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.096877   32699 round_trippers.go:580]     Audit-Id: f950b3a3-437f-4ed0-b111-65a481c05b81
	I0224 15:06:49.096883   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.096888   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.096893   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.096898   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.096903   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.097038   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:49.097224   32699 pod_ready.go:92] pod "etcd-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:06:49.097229   32699 pod_ready.go:81] duration metric: took 5.198124ms waiting for pod "etcd-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.097236   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.097265   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-358000
	I0224 15:06:49.097270   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.097275   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.097282   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.099874   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.099887   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.099895   32699 round_trippers.go:580]     Audit-Id: f03e93ef-890b-4c13-9d3b-38d71ca34966
	I0224 15:06:49.099904   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.099909   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.099915   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.099920   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.099925   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.099995   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-358000","namespace":"kube-system","uid":"9f99728a-c30f-46f0-aa6c-914ce4f95c85","resourceVersion":"385","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"59d82eb7ee48c62b0e1b1157e56efad8","kubernetes.io/config.mirror":"59d82eb7ee48c62b0e1b1157e56efad8","kubernetes.io/config.seen":"2023-02-24T23:06:20.399481307Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0224 15:06:49.100269   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.100275   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.100281   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.100287   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.102487   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.102497   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.102503   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.102508   32699 round_trippers.go:580]     Audit-Id: f413e262-abcf-4002-86d8-553b3ac7c508
	I0224 15:06:49.102516   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.102521   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.102526   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.102531   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.102634   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:49.102828   32699 pod_ready.go:92] pod "kube-apiserver-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:06:49.102836   32699 pod_ready.go:81] duration metric: took 5.594382ms waiting for pod "kube-apiserver-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.102842   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.102883   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-358000
	I0224 15:06:49.102890   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.102908   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.102917   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.105312   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.105322   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.105327   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.105332   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.105338   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.105342   32699 round_trippers.go:580]     Audit-Id: aa07d30a-3b52-4495-8b48-ed59f36ae7c8
	I0224 15:06:49.105349   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.105357   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.105441   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-358000","namespace":"kube-system","uid":"6d26b160-2631-4696-9633-0da5de0f9e6c","resourceVersion":"284","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"caa2ad2bdf9da4ce166e0faa9958c6ef","kubernetes.io/config.mirror":"caa2ad2bdf9da4ce166e0faa9958c6ef","kubernetes.io/config.seen":"2023-02-24T23:06:20.399482388Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0224 15:06:49.105707   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.105713   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.105718   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.105723   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.108015   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.108028   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.108035   32699 round_trippers.go:580]     Audit-Id: b5961f1c-25ec-41f1-ae7d-2f8099da22f3
	I0224 15:06:49.108056   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.108064   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.108068   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.108073   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.108078   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.108182   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:49.108379   32699 pod_ready.go:92] pod "kube-controller-manager-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:06:49.108387   32699 pod_ready.go:81] duration metric: took 5.538342ms waiting for pod "kube-controller-manager-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.108395   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rsf5q" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.108429   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-rsf5q
	I0224 15:06:49.108433   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.108439   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.108445   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.110552   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.110570   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.110581   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.110591   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.110603   32699 round_trippers.go:580]     Audit-Id: e0a50677-0fe7-4a42-93bb-c7431a7273bd
	I0224 15:06:49.110611   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.110619   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.110624   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.110680   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rsf5q","generateName":"kube-proxy-","namespace":"kube-system","uid":"34fab1a9-3416-47c1-9239-d7276b496a73","resourceVersion":"389","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0224 15:06:49.284894   32699 request.go:622] Waited for 173.950227ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.284955   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.284963   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.284973   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.284981   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.287797   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.287808   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.287814   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.287819   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.287824   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.287829   32699 round_trippers.go:580]     Audit-Id: 2f9d3be0-92bc-4008-95fd-a502340f4527
	I0224 15:06:49.287834   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.287839   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.287912   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:49.288105   32699 pod_ready.go:92] pod "kube-proxy-rsf5q" in "kube-system" namespace has status "Ready":"True"
	I0224 15:06:49.288111   32699 pod_ready.go:81] duration metric: took 179.70588ms waiting for pod "kube-proxy-rsf5q" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.288117   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.483832   32699 request.go:622] Waited for 195.668979ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-358000
	I0224 15:06:49.483891   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-358000
	I0224 15:06:49.483926   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.483938   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.483956   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.487985   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:49.487996   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.488001   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.488012   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.488017   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.488022   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.488027   32699 round_trippers.go:580]     Audit-Id: 26299cf4-e251-46bc-b002-c0918acae9e0
	I0224 15:06:49.488032   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.488089   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-358000","namespace":"kube-system","uid":"f1b648f4-a02a-4931-a791-578a6dba081f","resourceVersion":"281","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1df55cdda41cb4d1ff214c8ea21fdc45","kubernetes.io/config.mirror":"1df55cdda41cb4d1ff214c8ea21fdc45","kubernetes.io/config.seen":"2023-02-24T23:06:20.399486321Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0224 15:06:49.683832   32699 request.go:622] Waited for 195.495235ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.683919   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.683928   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.683940   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.683950   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.687895   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:49.687905   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.687911   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.687916   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.687921   32699 round_trippers.go:580]     Audit-Id: f7df7291-7f4f-4c6d-96b0-dddf5f5dc535
	I0224 15:06:49.687926   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.687931   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.687936   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.687989   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:49.688194   32699 pod_ready.go:92] pod "kube-scheduler-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:06:49.688200   32699 pod_ready.go:81] duration metric: took 400.066577ms waiting for pod "kube-scheduler-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.688206   32699 pod_ready.go:38] duration metric: took 15.619991597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
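
Several of the requests above are delayed with "Waited for ... due to client-side throttling, not priority and fairness". That message comes from client-go's client-side rate limiter, not from the API server's priority-and-fairness machinery. A minimal sketch of where the relevant knobs live; the QPS and Burst values shown are illustrative, not what minikube configures.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// With QPS/Burst left at zero, client-go falls back to its defaults
	// (historically 5 QPS with a burst of 10), which is what produces the
	// "Waited for ... due to client-side throttling" lines in the log.
	// Raising them (illustrative values) reduces those waits:
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}
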
	I0224 15:06:49.688220   32699 api_server.go:51] waiting for apiserver process to appear ...
	I0224 15:06:49.688277   32699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:06:49.697953   32699 command_runner.go:130] > 1929
	I0224 15:06:49.698643   32699 api_server.go:71] duration metric: took 16.043064121s to wait for apiserver process to appear ...
	I0224 15:06:49.698653   32699 api_server.go:87] waiting for apiserver healthz status ...
	I0224 15:06:49.698664   32699 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:58093/healthz ...
	I0224 15:06:49.704202   32699 api_server.go:278] https://127.0.0.1:58093/healthz returned 200:
	ok
	I0224 15:06:49.704236   32699 round_trippers.go:463] GET https://127.0.0.1:58093/version
	I0224 15:06:49.704241   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.704247   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.704253   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.705598   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:06:49.705607   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.705613   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.705618   32699 round_trippers.go:580]     Audit-Id: 8c19920b-abe0-425e-8f0f-3180324a9838
	I0224 15:06:49.705623   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.705628   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.705633   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.705638   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.705647   32699 round_trippers.go:580]     Content-Length: 263
	I0224 15:06:49.705656   32699 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0224 15:06:49.705701   32699 api_server.go:140] control plane version: v1.26.1
	I0224 15:06:49.705708   32699 api_server.go:130] duration metric: took 7.050391ms to wait for apiserver health ...
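
Once the system pods are Ready, the log probes /healthz and then GETs /version, which reports the control plane as v1.26.1. A hedged equivalent using client-go's discovery client is below; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz, the probe that returned "ok" in the log above.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version, the request that reported v1.26.1 in the log above.
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s (%s, %s)\n", info.GitVersion, info.Platform, info.GoVersion)
}
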
	I0224 15:06:49.705718   32699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 15:06:49.884067   32699 request.go:622] Waited for 178.291018ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods
	I0224 15:06:49.884153   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods
	I0224 15:06:49.884166   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.884183   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.884200   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.889202   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:49.889214   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.889220   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.889225   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.889246   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.889255   32699 round_trippers.go:580]     Audit-Id: 6a368721-9803-433e-9b62-65240f2912d3
	I0224 15:06:49.889262   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.889268   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.890070   32699 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"419","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0224 15:06:49.891331   32699 system_pods.go:59] 8 kube-system pods found
	I0224 15:06:49.891341   32699 system_pods.go:61] "coredns-787d4945fb-qfqth" [e37f5c65-d431-4ae8-9447-b6d61ee81dcd] Running
	I0224 15:06:49.891346   32699 system_pods.go:61] "etcd-multinode-358000" [cae08591-19d2-4e50-ba6b-73cf4552218c] Running
	I0224 15:06:49.891349   32699 system_pods.go:61] "kindnet-894f4" [75e84b3d-db2e-44fe-8674-95848e8b8051] Running
	I0224 15:06:49.891353   32699 system_pods.go:61] "kube-apiserver-multinode-358000" [9f99728a-c30f-46f0-aa6c-914ce4f95c85] Running
	I0224 15:06:49.891357   32699 system_pods.go:61] "kube-controller-manager-multinode-358000" [6d26b160-2631-4696-9633-0da5de0f9e6c] Running
	I0224 15:06:49.891361   32699 system_pods.go:61] "kube-proxy-rsf5q" [34fab1a9-3416-47c1-9239-d7276b496a73] Running
	I0224 15:06:49.891366   32699 system_pods.go:61] "kube-scheduler-multinode-358000" [f1b648f4-a02a-4931-a791-578a6dba081f] Running
	I0224 15:06:49.891370   32699 system_pods.go:61] "storage-provisioner" [ae236ae0-e586-40c5-804d-f33bc98c250a] Running
	I0224 15:06:49.891388   32699 system_pods.go:74] duration metric: took 185.659294ms to wait for pod list to return data ...
	I0224 15:06:49.891397   32699 default_sa.go:34] waiting for default service account to be created ...
	I0224 15:06:50.083623   32699 request.go:622] Waited for 192.174934ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/namespaces/default/serviceaccounts
	I0224 15:06:50.083673   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/default/serviceaccounts
	I0224 15:06:50.083680   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:50.083692   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:50.083745   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:50.088216   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:50.088226   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:50.088232   32699 round_trippers.go:580]     Audit-Id: 7ce2d208-e05a-40a1-a64a-6d2760c6594a
	I0224 15:06:50.088237   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:50.088242   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:50.088247   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:50.088252   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:50.088257   32699 round_trippers.go:580]     Content-Length: 261
	I0224 15:06:50.088263   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:50 GMT
	I0224 15:06:50.088277   32699 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"350a72df-ca83-4097-88f4-faff47ce9565","resourceVersion":"304","creationTimestamp":"2023-02-24T23:06:32Z"}}]}
	I0224 15:06:50.088397   32699 default_sa.go:45] found service account: "default"
	I0224 15:06:50.088404   32699 default_sa.go:55] duration metric: took 196.995923ms for default service account to be created ...
	I0224 15:06:50.088410   32699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 15:06:50.283932   32699 request.go:622] Waited for 195.475279ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods
	I0224 15:06:50.284027   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods
	I0224 15:06:50.284037   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:50.284050   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:50.284060   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:50.289630   32699 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0224 15:06:50.289643   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:50.289649   32699 round_trippers.go:580]     Audit-Id: 8c99a1bd-af97-464c-907e-20d9c4d3df13
	I0224 15:06:50.289654   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:50.289658   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:50.289663   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:50.289668   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:50.289675   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:50 GMT
	I0224 15:06:50.290064   32699 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"419","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0224 15:06:50.291326   32699 system_pods.go:86] 8 kube-system pods found
	I0224 15:06:50.291336   32699 system_pods.go:89] "coredns-787d4945fb-qfqth" [e37f5c65-d431-4ae8-9447-b6d61ee81dcd] Running
	I0224 15:06:50.291340   32699 system_pods.go:89] "etcd-multinode-358000" [cae08591-19d2-4e50-ba6b-73cf4552218c] Running
	I0224 15:06:50.291344   32699 system_pods.go:89] "kindnet-894f4" [75e84b3d-db2e-44fe-8674-95848e8b8051] Running
	I0224 15:06:50.291348   32699 system_pods.go:89] "kube-apiserver-multinode-358000" [9f99728a-c30f-46f0-aa6c-914ce4f95c85] Running
	I0224 15:06:50.291352   32699 system_pods.go:89] "kube-controller-manager-multinode-358000" [6d26b160-2631-4696-9633-0da5de0f9e6c] Running
	I0224 15:06:50.291356   32699 system_pods.go:89] "kube-proxy-rsf5q" [34fab1a9-3416-47c1-9239-d7276b496a73] Running
	I0224 15:06:50.291360   32699 system_pods.go:89] "kube-scheduler-multinode-358000" [f1b648f4-a02a-4931-a791-578a6dba081f] Running
	I0224 15:06:50.291363   32699 system_pods.go:89] "storage-provisioner" [ae236ae0-e586-40c5-804d-f33bc98c250a] Running
	I0224 15:06:50.291368   32699 system_pods.go:126] duration metric: took 202.947965ms to wait for k8s-apps to be running ...
	I0224 15:06:50.291373   32699 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 15:06:50.291413   32699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:06:50.301704   32699 system_svc.go:56] duration metric: took 10.326484ms WaitForService to wait for kubelet.
	I0224 15:06:50.301718   32699 kubeadm.go:578] duration metric: took 16.646122501s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0224 15:06:50.301734   32699 node_conditions.go:102] verifying NodePressure condition ...
	I0224 15:06:50.483579   32699 request.go:622] Waited for 181.795065ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/nodes
	I0224 15:06:50.483665   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes
	I0224 15:06:50.483676   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:50.483687   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:50.483698   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:50.487452   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:50.487463   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:50.487469   32699 round_trippers.go:580]     Audit-Id: 6264b331-4a1a-48bc-9f57-ac56e028901a
	I0224 15:06:50.487474   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:50.487481   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:50.487487   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:50.487491   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:50.487496   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:50 GMT
	I0224 15:06:50.487569   32699 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5007 chars]
	I0224 15:06:50.487794   32699 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0224 15:06:50.487807   32699 node_conditions.go:123] node cpu capacity is 6
	I0224 15:06:50.487817   32699 node_conditions.go:105] duration metric: took 186.072966ms to run NodePressure ...
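
The NodePressure verification reads the node's capacity (here 107016164Ki of ephemeral storage and 6 CPUs) and its pressure conditions from the NodeList response above. A small sketch of an equivalent read with client-go; it is an illustration under the same kubeconfig assumption as earlier, not minikube's code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())

		// The pressure conditions the log refers to when verifying NodePressure.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
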
	I0224 15:06:50.487827   32699 start.go:228] waiting for startup goroutines ...
	I0224 15:06:50.487833   32699 start.go:233] waiting for cluster config update ...
	I0224 15:06:50.487843   32699 start.go:242] writing updated cluster config ...
	I0224 15:06:50.509489   32699 out.go:177] 
	I0224 15:06:50.530894   32699 config.go:182] Loaded profile config "multinode-358000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:06:50.531005   32699 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/config.json ...
	I0224 15:06:50.553416   32699 out.go:177] * Starting worker node multinode-358000-m02 in cluster multinode-358000
	I0224 15:06:50.575234   32699 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 15:06:50.596258   32699 out.go:177] * Pulling base image ...
	I0224 15:06:50.638414   32699 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:06:50.638456   32699 cache.go:57] Caching tarball of preloaded images
	I0224 15:06:50.638416   32699 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 15:06:50.638651   32699 preload.go:174] Found /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 15:06:50.638672   32699 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 15:06:50.638778   32699 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/config.json ...
	I0224 15:06:50.696096   32699 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 15:06:50.696131   32699 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 15:06:50.696154   32699 cache.go:193] Successfully downloaded all kic artifacts
	I0224 15:06:50.696186   32699 start.go:364] acquiring machines lock for multinode-358000-m02: {Name:mk956cff82cb268a03a2fa83764d58115b1b74f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 15:06:50.696338   32699 start.go:368] acquired machines lock for "multinode-358000-m02" in 140.575µs
	I0224 15:06:50.696377   32699 start.go:93] Provisioning new machine with config: &{Name:multinode-358000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 15:06:50.696449   32699 start.go:125] createHost starting for "m02" (driver="docker")
	I0224 15:06:50.718122   32699 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0224 15:06:50.718286   32699 start.go:159] libmachine.API.Create for "multinode-358000" (driver="docker")
	I0224 15:06:50.718323   32699 client.go:168] LocalClient.Create starting
	I0224 15:06:50.718519   32699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem
	I0224 15:06:50.718610   32699 main.go:141] libmachine: Decoding PEM data...
	I0224 15:06:50.718635   32699 main.go:141] libmachine: Parsing certificate...
	I0224 15:06:50.718728   32699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem
	I0224 15:06:50.718792   32699 main.go:141] libmachine: Decoding PEM data...
	I0224 15:06:50.718807   32699 main.go:141] libmachine: Parsing certificate...
	I0224 15:06:50.739049   32699 cli_runner.go:164] Run: docker network inspect multinode-358000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 15:06:50.795271   32699 network_create.go:76] Found existing network {name:multinode-358000 subnet:0xc00169c3c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0224 15:06:50.795309   32699 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-358000-m02" container
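
The kic driver reuses the existing multinode-358000 network (gateway 192.168.58.1, control plane at .2) and hands the new worker the next address, .3. A toy sketch of that arithmetic, assuming a /24 and a simple "increment the host octet" policy; a real driver also has to avoid collisions and the broadcast address.

package main

import (
	"fmt"
	"net"
)

// nextIP returns ip with the last octet advanced by offset; purely illustrative.
func nextIP(ip net.IP, offset byte) net.IP {
	v4 := ip.To4()
	out := make(net.IP, len(v4))
	copy(out, v4)
	out[3] += offset
	return out
}

func main() {
	gateway := net.ParseIP("192.168.58.1")
	controlPlane := nextIP(gateway, 1) // 192.168.58.2, the existing node
	worker := nextIP(gateway, 2)       // 192.168.58.3, matching the log
	fmt.Println(controlPlane, worker)
}
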
	I0224 15:06:50.795431   32699 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 15:06:50.852310   32699 cli_runner.go:164] Run: docker volume create multinode-358000-m02 --label name.minikube.sigs.k8s.io=multinode-358000-m02 --label created_by.minikube.sigs.k8s.io=true
	I0224 15:06:50.907785   32699 oci.go:103] Successfully created a docker volume multinode-358000-m02
	I0224 15:06:50.907904   32699 cli_runner.go:164] Run: docker run --rm --name multinode-358000-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-358000-m02 --entrypoint /usr/bin/test -v multinode-358000-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0224 15:06:51.344944   32699 oci.go:107] Successfully prepared a docker volume multinode-358000-m02
	I0224 15:06:51.344977   32699 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:06:51.344989   32699 kic.go:190] Starting extracting preloaded images to volume ...
	I0224 15:06:51.345107   32699 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-358000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 15:06:57.969175   32699 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-358000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.623816766s)
	I0224 15:06:57.969197   32699 kic.go:199] duration metric: took 6.624007 seconds to extract preloaded images to volume
	I0224 15:06:57.969310   32699 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 15:06:58.114499   32699 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-358000-m02 --name multinode-358000-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-358000-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-358000-m02 --network multinode-358000 --ip 192.168.58.3 --volume multinode-358000-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0224 15:06:58.466019   32699 cli_runner.go:164] Run: docker container inspect multinode-358000-m02 --format={{.State.Running}}
	I0224 15:06:58.531727   32699 cli_runner.go:164] Run: docker container inspect multinode-358000-m02 --format={{.State.Status}}
	I0224 15:06:58.596177   32699 cli_runner.go:164] Run: docker exec multinode-358000-m02 stat /var/lib/dpkg/alternatives/iptables
	I0224 15:06:58.713610   32699 oci.go:144] the created container "multinode-358000-m02" has a running status.
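
After the `docker run -d` above, the driver polls `docker container inspect` with `--format={{.State.Running}}` and `--format={{.State.Status}}` until the container is running. A hedged Go sketch of the same poll using os/exec; the container name is taken from the log, everything else is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerStatus shells out to `docker container inspect --format={{.State.Status}}`.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const name = "multinode-358000-m02" // container created in the log above

	// Poll until the container reports "running", roughly what the driver does.
	for i := 0; i < 30; i++ {
		status, err := containerStatus(name)
		if err == nil && status == "running" {
			fmt.Println("container is running")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("container did not reach running state")
}
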
	I0224 15:06:58.713643   32699 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa...
	I0224 15:06:58.902350   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0224 15:06:58.902415   32699 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 15:06:59.007556   32699 cli_runner.go:164] Run: docker container inspect multinode-358000-m02 --format={{.State.Status}}
	I0224 15:06:59.070105   32699 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 15:06:59.070124   32699 kic_runner.go:114] Args: [docker exec --privileged multinode-358000-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 15:06:59.173186   32699 cli_runner.go:164] Run: docker container inspect multinode-358000-m02 --format={{.State.Status}}
	I0224 15:06:59.231411   32699 machine.go:88] provisioning docker machine ...
	I0224 15:06:59.231457   32699 ubuntu.go:169] provisioning hostname "multinode-358000-m02"
	I0224 15:06:59.231559   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:06:59.291513   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:59.291904   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58163 <nil> <nil>}
	I0224 15:06:59.291918   32699 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-358000-m02 && echo "multinode-358000-m02" | sudo tee /etc/hostname
	I0224 15:06:59.436543   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-358000-m02
	
	I0224 15:06:59.436633   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:06:59.495237   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:59.495598   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58163 <nil> <nil>}
	I0224 15:06:59.495615   32699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-358000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-358000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-358000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 15:06:59.630904   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
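
For context: the two SSH commands above first set the hostname, then make /etc/hosts consistent with it: if no line already names the host, an existing 127.0.1.1 entry is rewritten, otherwise one is appended. Below is a minimal Go sketch of the same idempotent update (a hypothetical standalone helper working on a local file, not minikube's SSH-based implementation).

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell logic above: if no line already ends with
// the hostname, either rewrite an existing "127.0.1.1 ..." line or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	// Example against a scratch copy; the real provisioner edits /etc/hosts via sudo.
	tmp := "hosts.sample"
	os.WriteFile(tmp, []byte("127.0.0.1 localhost\n127.0.1.1 old-name\n"), 0644)
	if err := ensureHostsEntry(tmp, "multinode-358000-m02"); err != nil {
		fmt.Println("error:", err)
		return
	}
	b, _ := os.ReadFile(tmp)
	fmt.Print(string(b))
}
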
	I0224 15:06:59.630930   32699 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
	I0224 15:06:59.630949   32699 ubuntu.go:177] setting up certificates
	I0224 15:06:59.630959   32699 provision.go:83] configureAuth start
	I0224 15:06:59.631042   32699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000-m02
	I0224 15:06:59.688571   32699 provision.go:138] copyHostCerts
	I0224 15:06:59.688627   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:06:59.688699   32699 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
	I0224 15:06:59.688704   32699 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:06:59.688845   32699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
	I0224 15:06:59.689012   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:06:59.689046   32699 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
	I0224 15:06:59.689051   32699 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:06:59.689116   32699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
	I0224 15:06:59.689236   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:06:59.689280   32699 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
	I0224 15:06:59.689284   32699 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:06:59.689348   32699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
	I0224 15:06:59.689469   32699 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.multinode-358000-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-358000-m02]
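
provision.go then generates the Docker server certificate whose SANs cover the node IP, loopback, and the hostnames listed above. For illustration, here is a Go sketch that builds a certificate with the same SAN list via crypto/x509 (self-signed here for brevity; the real provisioner signs it with the CA key referenced in the log line).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-358000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same SAN list as the log line above.
		DNSNames:    []string{"localhost", "minikube", "multinode-358000-m02"},
		IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed for brevity; minikube signs with ca.pem / ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
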
	I0224 15:06:59.878774   32699 provision.go:172] copyRemoteCerts
	I0224 15:06:59.878846   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 15:06:59.878903   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:06:59.936002   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58163 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa Username:docker}
	I0224 15:07:00.031564   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0224 15:07:00.031644   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 15:07:00.049713   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0224 15:07:00.049807   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0224 15:07:00.067036   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0224 15:07:00.067123   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 15:07:00.084636   32699 provision.go:86] duration metric: configureAuth took 453.653262ms
	I0224 15:07:00.084649   32699 ubuntu.go:193] setting minikube options for container-runtime
	I0224 15:07:00.084805   32699 config.go:182] Loaded profile config "multinode-358000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:07:00.084870   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:00.143243   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:07:00.143586   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58163 <nil> <nil>}
	I0224 15:07:00.143597   32699 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 15:07:00.278545   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 15:07:00.278557   32699 ubuntu.go:71] root file system type: overlay
	I0224 15:07:00.278649   32699 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 15:07:00.278747   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:00.338973   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:07:00.339338   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58163 <nil> <nil>}
	I0224 15:07:00.339388   32699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 15:07:00.482878   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 15:07:00.482984   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:00.543590   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:07:00.543961   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58163 <nil> <nil>}
	I0224 15:07:00.543984   32699 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 15:07:01.181738   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:07:00.480379297 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 15:07:01.181761   32699 machine.go:91] provisioned docker machine in 1.95026147s
	I0224 15:07:01.181767   32699 client.go:171] LocalClient.Create took 10.463124678s
	I0224 15:07:01.181784   32699 start.go:167] duration metric: libmachine.API.Create for "multinode-358000" took 10.463186209s
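
The docker.service update just shown follows a compare-then-swap pattern: render the full unit to docker.service.new, diff it against the live unit, and only when they differ move the new file into place and run daemon-reload, enable, and restart. A rough Go sketch of that step follows (illustrative only; it assumes root and a systemd host, and simply reuses the systemctl invocations from the log).

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const unit = "/lib/systemd/system/docker.service"
	const candidate = unit + ".new"

	oldData, _ := os.ReadFile(unit) // a missing unit simply counts as "different"
	newData, err := os.ReadFile(candidate)
	if err != nil {
		log.Fatal(err)
	}
	if bytes.Equal(oldData, newData) {
		log.Println("docker.service unchanged; skipping restart")
		return
	}
	if err := os.Rename(candidate, unit); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		cmd := exec.Command(args[0], args[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}
}
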
	I0224 15:07:01.181790   32699 start.go:300] post-start starting for "multinode-358000-m02" (driver="docker")
	I0224 15:07:01.181795   32699 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 15:07:01.181875   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 15:07:01.181930   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:01.241583   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58163 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa Username:docker}
	I0224 15:07:01.338747   32699 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 15:07:01.343201   32699 command_runner.go:130] > NAME="Ubuntu"
	I0224 15:07:01.343212   32699 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0224 15:07:01.343216   32699 command_runner.go:130] > ID=ubuntu
	I0224 15:07:01.343220   32699 command_runner.go:130] > ID_LIKE=debian
	I0224 15:07:01.343225   32699 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0224 15:07:01.343229   32699 command_runner.go:130] > VERSION_ID="20.04"
	I0224 15:07:01.343233   32699 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0224 15:07:01.343237   32699 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0224 15:07:01.343242   32699 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0224 15:07:01.343248   32699 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0224 15:07:01.343256   32699 command_runner.go:130] > VERSION_CODENAME=focal
	I0224 15:07:01.343260   32699 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0224 15:07:01.343307   32699 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 15:07:01.343323   32699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 15:07:01.343329   32699 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 15:07:01.343334   32699 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 15:07:01.343341   32699 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
	I0224 15:07:01.343441   32699 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
	I0224 15:07:01.343604   32699 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
	I0224 15:07:01.343610   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> /etc/ssl/certs/268712.pem
	I0224 15:07:01.343794   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 15:07:01.351806   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:07:01.371381   32699 start.go:303] post-start completed in 189.577003ms
	I0224 15:07:01.371906   32699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000-m02
	I0224 15:07:01.430527   32699 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/config.json ...
	I0224 15:07:01.430959   32699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 15:07:01.431019   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:01.490241   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58163 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa Username:docker}
	I0224 15:07:01.582412   32699 command_runner.go:130] > 6%!(MISSING)
	I0224 15:07:01.582507   32699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"

	I0224 15:07:01.586996   32699 command_runner.go:130] > 92G
	I0224 15:07:01.587317   32699 start.go:128] duration metric: createHost completed in 10.890535302s
	I0224 15:07:01.587329   32699 start.go:83] releasing machines lock for "multinode-358000-m02", held for 10.890657634s
	I0224 15:07:01.587425   32699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000-m02
	I0224 15:07:01.670143   32699 out.go:177] * Found network options:
	I0224 15:07:01.691081   32699 out.go:177]   - NO_PROXY=192.168.58.2
	W0224 15:07:01.729153   32699 proxy.go:119] fail to check proxy env: Error ip not in block
	W0224 15:07:01.729188   32699 proxy.go:119] fail to check proxy env: Error ip not in block
	I0224 15:07:01.729282   32699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 15:07:01.729333   32699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 15:07:01.729352   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:01.729425   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:01.792549   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58163 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa Username:docker}
	I0224 15:07:01.792604   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58163 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa Username:docker}
	I0224 15:07:01.885889   32699 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0224 15:07:01.885906   32699 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0224 15:07:01.885911   32699 command_runner.go:130] > Device: f2h/242d	Inode: 2885207     Links: 1
	I0224 15:07:01.885916   32699 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 15:07:01.885927   32699 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0224 15:07:01.885940   32699 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0224 15:07:01.885953   32699 command_runner.go:130] > Change: 2023-02-24 22:41:56.862825099 +0000
	I0224 15:07:01.885963   32699 command_runner.go:130] >  Birth: -
	I0224 15:07:01.886033   32699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 15:07:01.938812   32699 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0224 15:07:01.938851   32699 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 15:07:01.938913   32699 ssh_runner.go:195] Run: which cri-dockerd
	I0224 15:07:01.943256   32699 command_runner.go:130] > /usr/bin/cri-dockerd
	I0224 15:07:01.943385   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 15:07:01.951642   32699 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 15:07:01.964604   32699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 15:07:01.979412   32699 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0224 15:07:01.979438   32699 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0224 15:07:01.979445   32699 start.go:485] detecting cgroup driver to use...
	I0224 15:07:01.979456   32699 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:07:01.979532   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:07:01.992106   32699 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0224 15:07:01.992119   32699 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
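
crictl is pointed at a CRI endpoint through /etc/crictl.yaml; here it is first set to containerd and rewritten for cri-dockerd shortly afterwards. A tiny Go sketch that emits the same two-line file (written to a local path rather than /etc, which needs sudo):

package main

import (
	"log"
	"os"
)

func main() {
	// Same two lines the provisioner pipes through sudo tee /etc/crictl.yaml above.
	crictlYAML := "runtime-endpoint: unix:///run/containerd/containerd.sock\n" +
		"image-endpoint: unix:///run/containerd/containerd.sock\n"
	if err := os.WriteFile("crictl.yaml", []byte(crictlYAML), 0644); err != nil {
		log.Fatal(err)
	}
}
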
	I0224 15:07:01.992867   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 15:07:02.001390   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 15:07:02.010157   32699 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 15:07:02.010219   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 15:07:02.018937   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:07:02.027400   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 15:07:02.035890   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:07:02.044374   32699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 15:07:02.052607   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 15:07:02.061765   32699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 15:07:02.068551   32699 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0224 15:07:02.069298   32699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 15:07:02.076586   32699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:07:02.148349   32699 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 15:07:02.224253   32699 start.go:485] detecting cgroup driver to use...
	I0224 15:07:02.224271   32699 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:07:02.224330   32699 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 15:07:02.234831   32699 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0224 15:07:02.235098   32699 command_runner.go:130] > [Unit]
	I0224 15:07:02.235112   32699 command_runner.go:130] > Description=Docker Application Container Engine
	I0224 15:07:02.235123   32699 command_runner.go:130] > Documentation=https://docs.docker.com
	I0224 15:07:02.235132   32699 command_runner.go:130] > BindsTo=containerd.service
	I0224 15:07:02.235142   32699 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0224 15:07:02.235162   32699 command_runner.go:130] > Wants=network-online.target
	I0224 15:07:02.235172   32699 command_runner.go:130] > Requires=docker.socket
	I0224 15:07:02.235178   32699 command_runner.go:130] > StartLimitBurst=3
	I0224 15:07:02.235183   32699 command_runner.go:130] > StartLimitIntervalSec=60
	I0224 15:07:02.235200   32699 command_runner.go:130] > [Service]
	I0224 15:07:02.235203   32699 command_runner.go:130] > Type=notify
	I0224 15:07:02.235207   32699 command_runner.go:130] > Restart=on-failure
	I0224 15:07:02.235233   32699 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0224 15:07:02.235240   32699 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0224 15:07:02.235255   32699 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0224 15:07:02.235276   32699 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0224 15:07:02.235306   32699 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0224 15:07:02.235321   32699 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0224 15:07:02.235344   32699 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0224 15:07:02.235372   32699 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0224 15:07:02.235389   32699 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0224 15:07:02.235396   32699 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0224 15:07:02.235400   32699 command_runner.go:130] > ExecStart=
	I0224 15:07:02.235414   32699 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0224 15:07:02.235420   32699 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0224 15:07:02.235426   32699 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0224 15:07:02.235431   32699 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0224 15:07:02.235434   32699 command_runner.go:130] > LimitNOFILE=infinity
	I0224 15:07:02.235438   32699 command_runner.go:130] > LimitNPROC=infinity
	I0224 15:07:02.235443   32699 command_runner.go:130] > LimitCORE=infinity
	I0224 15:07:02.235463   32699 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0224 15:07:02.235469   32699 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0224 15:07:02.235492   32699 command_runner.go:130] > TasksMax=infinity
	I0224 15:07:02.235496   32699 command_runner.go:130] > TimeoutStartSec=0
	I0224 15:07:02.235503   32699 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0224 15:07:02.235507   32699 command_runner.go:130] > Delegate=yes
	I0224 15:07:02.235515   32699 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0224 15:07:02.235519   32699 command_runner.go:130] > KillMode=process
	I0224 15:07:02.235523   32699 command_runner.go:130] > [Install]
	I0224 15:07:02.235526   32699 command_runner.go:130] > WantedBy=multi-user.target
	I0224 15:07:02.236163   32699 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 15:07:02.236230   32699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 15:07:02.246828   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:07:02.261085   32699 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 15:07:02.261103   32699 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 15:07:02.261841   32699 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 15:07:02.364422   32699 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 15:07:02.455679   32699 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 15:07:02.455697   32699 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 15:07:02.469367   32699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:07:02.555761   32699 ssh_runner.go:195] Run: sudo systemctl restart docker
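
To switch Docker's cgroup driver, an in-memory daemon.json (144 bytes, contents not echoed in the log) is copied to /etc/docker/daemon.json before the daemon-reload and restart above. A plausible file for this step might look like the sketch below; the exact payload is an assumption, and only the native.cgroupdriver=cgroupfs setting is implied by the log.

package main

import (
	"log"
	"os"
)

func main() {
	// Hypothetical daemon.json; the real 144-byte file is built in memory by
	// minikube and is not printed above. The relevant part for this step is
	// forcing the cgroupfs cgroup driver.
	daemonJSON := `{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2"
}
`
	// Written locally here; the provisioner scp's it to /etc/docker/daemon.json
	// and then runs daemon-reload and restart docker (the lines just above).
	if err := os.WriteFile("daemon.json", []byte(daemonJSON), 0644); err != nil {
		log.Fatal(err)
	}
}
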
	I0224 15:07:02.810811   32699 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:07:02.879353   32699 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0224 15:07:02.879426   32699 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 15:07:02.952848   32699 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:07:03.021948   32699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:07:03.097835   32699 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 15:07:03.109230   32699 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 15:07:03.109310   32699 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 15:07:03.113364   32699 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0224 15:07:03.113377   32699 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0224 15:07:03.113383   32699 command_runner.go:130] > Device: 100013h/1048595d	Inode: 206         Links: 1
	I0224 15:07:03.113389   32699 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0224 15:07:03.113398   32699 command_runner.go:130] > Access: 2023-02-24 23:07:03.105379448 +0000
	I0224 15:07:03.113427   32699 command_runner.go:130] > Modify: 2023-02-24 23:07:03.105379448 +0000
	I0224 15:07:03.113435   32699 command_runner.go:130] > Change: 2023-02-24 23:07:03.106379448 +0000
	I0224 15:07:03.113439   32699 command_runner.go:130] >  Birth: -
	I0224 15:07:03.113471   32699 start.go:553] Will wait 60s for crictl version
	I0224 15:07:03.113515   32699 ssh_runner.go:195] Run: which crictl
	I0224 15:07:03.117227   32699 command_runner.go:130] > /usr/bin/crictl
	I0224 15:07:03.117280   32699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 15:07:03.208024   32699 command_runner.go:130] > Version:  0.1.0
	I0224 15:07:03.208050   32699 command_runner.go:130] > RuntimeName:  docker
	I0224 15:07:03.208057   32699 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0224 15:07:03.208063   32699 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0224 15:07:03.210415   32699 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 15:07:03.210495   32699 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:07:03.235616   32699 command_runner.go:130] > 23.0.1
	I0224 15:07:03.235698   32699 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:07:03.258598   32699 command_runner.go:130] > 23.0.1
	I0224 15:07:03.302840   32699 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 15:07:03.324952   32699 out.go:177]   - env NO_PROXY=192.168.58.2
	I0224 15:07:03.347134   32699 cli_runner.go:164] Run: docker exec -t multinode-358000-m02 dig +short host.docker.internal
	I0224 15:07:03.465504   32699 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 15:07:03.465611   32699 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 15:07:03.470058   32699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:07:03.480042   32699 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000 for IP: 192.168.58.3
	I0224 15:07:03.480069   32699 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:07:03.480254   32699 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 15:07:03.480333   32699 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 15:07:03.480344   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0224 15:07:03.480369   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0224 15:07:03.480387   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0224 15:07:03.480410   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0224 15:07:03.480501   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 15:07:03.480545   32699 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 15:07:03.480556   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 15:07:03.480591   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 15:07:03.480626   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 15:07:03.480657   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 15:07:03.480727   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:07:03.480774   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:07:03.480795   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem -> /usr/share/ca-certificates/26871.pem
	I0224 15:07:03.480814   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> /usr/share/ca-certificates/268712.pem
	I0224 15:07:03.481228   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 15:07:03.498650   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 15:07:03.516321   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 15:07:03.534142   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 15:07:03.551720   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 15:07:03.569392   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 15:07:03.588290   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 15:07:03.605713   32699 ssh_runner.go:195] Run: openssl version
	I0224 15:07:03.611310   32699 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0224 15:07:03.611628   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 15:07:03.620132   32699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 15:07:03.624143   32699 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:07:03.624254   32699 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:07:03.624303   32699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 15:07:03.629865   32699 command_runner.go:130] > 51391683
	I0224 15:07:03.630282   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
	I0224 15:07:03.638632   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 15:07:03.646950   32699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 15:07:03.651035   32699 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:07:03.651169   32699 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:07:03.651217   32699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 15:07:03.656395   32699 command_runner.go:130] > 3ec20f2e
	I0224 15:07:03.656655   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 15:07:03.665561   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 15:07:03.673664   32699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:07:03.677880   32699 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:07:03.677902   32699 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:07:03.677948   32699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:07:03.683247   32699 command_runner.go:130] > b5213941
	I0224 15:07:03.683549   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
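
Each certificate installed under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0), which is how OpenSSL's hashed-directory lookup finds trusted certs. Here is a Go sketch of that hash-and-symlink step, shelling out to openssl just as the commands above do (illustrative; assumes openssl on PATH and write access to the certs directory).

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates <certsDir>/<subject-hash>.0 -> certPath so that OpenSSL's
// hashed-directory lookup can find the certificate, mirroring the ln -fs above.
func linkByHash(certsDir, certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/etc/ssl/certs", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}
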
	I0224 15:07:03.691777   32699 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 15:07:03.715388   32699 command_runner.go:130] > cgroupfs
	I0224 15:07:03.717385   32699 cni.go:84] Creating CNI manager for ""
	I0224 15:07:03.717397   32699 cni.go:136] 2 nodes found, recommending kindnet
	I0224 15:07:03.717404   32699 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 15:07:03.717416   32699 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-358000 NodeName:multinode-358000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 15:07:03.717499   32699 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-358000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 15:07:03.717540   32699 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-358000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 15:07:03.717599   32699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 15:07:03.725322   32699 command_runner.go:130] > kubeadm
	I0224 15:07:03.725334   32699 command_runner.go:130] > kubectl
	I0224 15:07:03.725338   32699 command_runner.go:130] > kubelet
	I0224 15:07:03.726180   32699 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 15:07:03.726255   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0224 15:07:03.734917   32699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0224 15:07:03.749003   32699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 15:07:03.762036   32699 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0224 15:07:03.765876   32699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:07:03.775927   32699 host.go:66] Checking if "multinode-358000" exists ...
	I0224 15:07:03.776106   32699 config.go:182] Loaded profile config "multinode-358000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:07:03.776145   32699 start.go:301] JoinCluster: &{Name:multinode-358000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:07:03.776227   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0224 15:07:03.776279   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:07:03.835876   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:07:04.003448   32699 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token q2oruh.uu23whj99rwoor7l --discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e 
	I0224 15:07:04.003503   32699 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 15:07:04.003530   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q2oruh.uu23whj99rwoor7l --discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-358000-m02"
	I0224 15:07:04.045831   32699 command_runner.go:130] > [preflight] Running pre-flight checks
	I0224 15:07:04.160233   32699 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0224 15:07:04.160255   32699 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0224 15:07:04.187149   32699 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:07:04.187183   32699 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:07:04.187187   32699 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0224 15:07:04.264594   32699 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0224 15:07:05.777610   32699 command_runner.go:130] > This node has joined the cluster:
	I0224 15:07:05.777636   32699 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0224 15:07:05.777648   32699 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0224 15:07:05.777662   32699 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0224 15:07:05.781276   32699 command_runner.go:130] ! W0224 23:07:04.044825    1234 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 15:07:05.781302   32699 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0224 15:07:05.781314   32699 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:07:05.781329   32699 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q2oruh.uu23whj99rwoor7l --discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-358000-m02": (1.777735574s)
	I0224 15:07:05.781344   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0224 15:07:05.945590   32699 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0224 15:07:05.945607   32699 start.go:303] JoinCluster complete in 2.169398252s
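
The worker join above is driven entirely by the kubeadm command the log prints over the SSH session opened at the top of this excerpt. A minimal local sketch of the same two steps (join, then enable the kubelet unit), using os/exec rather than minikube's internal ssh_runner; the token, discovery hash and node name are copied verbatim from the log and are specific to this run:

// Hypothetical sketch, not minikube's implementation: reproduce the join and
// the kubelet enablement the log shows, run locally on the target node.
package main

import (
	"log"
	"os/exec"
)

func main() {
	join := `sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" ` +
		`kubeadm join control-plane.minikube.internal:8443 ` +
		`--token q2oruh.uu23whj99rwoor7l ` +
		`--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e ` +
		`--ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock ` +
		`--node-name=multinode-358000-m02`

	// kubeadm prints the pre-flight and TLS-bootstrap progress seen in the log.
	out, err := exec.Command("/bin/bash", "-c", join).CombinedOutput()
	log.Printf("%s", out)
	if err != nil {
		log.Fatalf("kubeadm join failed: %v", err)
	}

	// The log then reloads systemd and enables kubelet so it survives restarts.
	enable := "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	if out, err := exec.Command("/bin/bash", "-c", enable).CombinedOutput(); err != nil {
		log.Fatalf("enabling kubelet failed: %v\n%s", err, out)
	}
}
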
	I0224 15:07:05.945617   32699 cni.go:84] Creating CNI manager for ""
	I0224 15:07:05.945622   32699 cni.go:136] 2 nodes found, recommending kindnet
	I0224 15:07:05.945709   32699 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0224 15:07:05.950020   32699 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0224 15:07:05.950046   32699 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0224 15:07:05.950060   32699 command_runner.go:130] > Device: a6h/166d	Inode: 2757559     Links: 1
	I0224 15:07:05.950072   32699 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 15:07:05.950084   32699 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0224 15:07:05.950090   32699 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0224 15:07:05.950095   32699 command_runner.go:130] > Change: 2023-02-24 22:41:56.035825051 +0000
	I0224 15:07:05.950099   32699 command_runner.go:130] >  Birth: -
	I0224 15:07:05.950129   32699 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0224 15:07:05.950135   32699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0224 15:07:05.963464   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 15:07:06.127021   32699 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0224 15:07:06.129389   32699 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0224 15:07:06.131235   32699 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0224 15:07:06.140421   32699 command_runner.go:130] > daemonset.apps/kindnet configured
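
For reference, a minimal sketch of the manifest apply logged just above, assuming the same in-VM kubectl binary, kubeconfig and rendered /var/tmp/minikube/cni.yaml paths that the log shows; the "unchanged"/"configured" lines are kubectl's normal apply output for the kindnet objects:

// Hypothetical sketch: apply the CNI manifest exactly as the ssh_runner did.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.26.1/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	log.Printf("%s", out) // e.g. "daemonset.apps/kindnet configured"
	if err != nil {
		log.Fatalf("kubectl apply failed: %v", err)
	}
}
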
	I0224 15:07:06.147087   32699 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:07:06.147288   32699 kapi.go:59] client config for multinode-358000: &rest.Config{Host:"https://127.0.0.1:58093", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:07:06.147532   32699 round_trippers.go:463] GET https://127.0.0.1:58093/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 15:07:06.147539   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.147545   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.147551   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.150202   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.150214   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.150219   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.150225   32699 round_trippers.go:580]     Audit-Id: 8e1a5a25-13e9-46f1-b668-f09239e24f6c
	I0224 15:07:06.150230   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.150236   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.150249   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.150255   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.150259   32699 round_trippers.go:580]     Content-Length: 291
	I0224 15:07:06.150271   32699 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0964618a-58cb-4193-adc5-a51a070222ce","resourceVersion":"424","creationTimestamp":"2023-02-24T23:06:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0224 15:07:06.150321   32699 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-358000" context rescaled to 1 replicas
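
The GET above targets the coredns Deployment's scale subresource, and kapi.go pins it to a single replica so the multi-node cluster does not gain a DNS replica per node. A hedged client-go sketch of the same rescale, assuming the host kubeconfig path printed by loader.go; the write is only issued when replicas differ from 1, which is why this run reports "unchanged"-style behaviour:

// Hypothetical client-go sketch of the coredns rescale seen in the log.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/15909-26406/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	// GET .../deployments/coredns/scale, matching the request logged above.
	scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("coredns pinned to 1 replica")
}
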
	I0224 15:07:06.150336   32699 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 15:07:06.171604   32699 out.go:177] * Verifying Kubernetes components...
	I0224 15:07:06.212607   32699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:07:06.224599   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-358000
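
The docker inspect above is how the harness resolves the API server's host port (58093 in this run) from the published-port map of the control-plane container. A small sketch of the same lookup using the Go-template format string shown in the log; running it outside the test harness is an assumption:

// Hypothetical sketch: read the host port Docker published for 8443/tcp.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "multinode-358000").Output()
	if err != nil {
		log.Fatal(err)
	}
	hostPort := strings.TrimSpace(string(out))
	fmt.Printf("apiserver reachable at https://127.0.0.1:%s\n", hostPort) // 58093 in this run
}
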
	I0224 15:07:06.283806   32699 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:07:06.284045   32699 kapi.go:59] client config for multinode-358000: &rest.Config{Host:"https://127.0.0.1:58093", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:07:06.284291   32699 node_ready.go:35] waiting up to 6m0s for node "multinode-358000-m02" to be "Ready" ...
	I0224 15:07:06.284342   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:06.284347   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.284353   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.284360   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.286858   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.286873   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.286879   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.286885   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.286895   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.286900   32699 round_trippers.go:580]     Audit-Id: 6e3d757f-5dcd-42b9-b4a4-18b4f86b6a72
	I0224 15:07:06.286911   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.286916   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.286995   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:06.287204   32699 node_ready.go:49] node "multinode-358000-m02" has status "Ready":"True"
	I0224 15:07:06.287210   32699 node_ready.go:38] duration metric: took 2.91073ms waiting for node "multinode-358000-m02" to be "Ready" ...
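
node_ready.go decides readiness from the Ready condition of the node object fetched above. A client-go sketch of the equivalent check, assuming the same host kubeconfig used throughout this run:

// Hypothetical client-go sketch of the node readiness check.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/15909-26406/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := client.CoreV1().Nodes().Get(context.Background(), "multinode-358000-m02", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s\n", node.Name, cond.Status)
		}
	}
}
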
	I0224 15:07:06.287216   32699 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 15:07:06.287255   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods
	I0224 15:07:06.287260   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.287265   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.287271   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.290282   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:06.290292   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.290298   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.290303   32699 round_trippers.go:580]     Audit-Id: 1084d92a-e995-4176-9a86-422a1bc76ce7
	I0224 15:07:06.290310   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.290316   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.290320   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.290329   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.291639   32699 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"472"},"items":[{"metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"419","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0224 15:07:06.293277   32699 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-qfqth" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.293326   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:07:06.293332   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.293338   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.293343   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.296247   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.296260   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.296266   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.296270   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.296275   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.296284   32699 round_trippers.go:580]     Audit-Id: 964c98ff-e9c8-4271-aff4-38c92ddef0cb
	I0224 15:07:06.296291   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.296296   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.296358   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"419","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0224 15:07:06.296659   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:06.296665   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.296671   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.296681   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.299036   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.299047   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.299053   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.299058   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.299066   32699 round_trippers.go:580]     Audit-Id: 4536da28-fb2e-47de-bfa3-29108a013910
	I0224 15:07:06.299073   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.299079   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.299086   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.299175   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0224 15:07:06.299381   32699 pod_ready.go:92] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:06.299388   32699 pod_ready.go:81] duration metric: took 6.100854ms waiting for pod "coredns-787d4945fb-qfqth" in "kube-system" namespace to be "Ready" ...
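
pod_ready.go repeats the same pattern for each system-critical pod (etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler follow below), checking the pod's Ready condition after each GET. A hedged client-go sketch of that per-pod check over the kube-system namespace, again assuming the host kubeconfig from the log:

// Hypothetical client-go sketch of the per-pod readiness sweep.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/15909-26406/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for i := range pods.Items {
		fmt.Printf("%-45s Ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
	}
}
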
	I0224 15:07:06.299394   32699 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.299435   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/etcd-multinode-358000
	I0224 15:07:06.299441   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.299447   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.299452   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.301648   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.301660   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.301667   32699 round_trippers.go:580]     Audit-Id: 9c2f7345-37eb-4278-be6e-8b09f142faf9
	I0224 15:07:06.301674   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.301679   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.301685   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.301691   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.301696   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.301763   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-358000","namespace":"kube-system","uid":"cae08591-19d2-4e50-ba6b-73cf4552218c","resourceVersion":"282","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"f0e985c72569733baf436fab6e966c50","kubernetes.io/config.mirror":"f0e985c72569733baf436fab6e966c50","kubernetes.io/config.seen":"2023-02-24T23:06:20.399469529Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0224 15:07:06.302001   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:06.302008   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.302013   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.302019   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.304276   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.304286   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.304291   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.304309   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.304318   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.304323   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.304335   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.304342   32699 round_trippers.go:580]     Audit-Id: 278216a7-74a0-4953-9132-7fe06f1c8231
	I0224 15:07:06.304435   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0224 15:07:06.304637   32699 pod_ready.go:92] pod "etcd-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:06.304644   32699 pod_ready.go:81] duration metric: took 5.245151ms waiting for pod "etcd-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.304652   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.304683   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-358000
	I0224 15:07:06.304688   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.304694   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.304699   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.306747   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.306758   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.306776   32699 round_trippers.go:580]     Audit-Id: 6d38573a-89e7-4a39-8bcf-4f782c2ebee9
	I0224 15:07:06.306784   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.306789   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.306794   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.306799   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.306805   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.306879   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-358000","namespace":"kube-system","uid":"9f99728a-c30f-46f0-aa6c-914ce4f95c85","resourceVersion":"385","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"59d82eb7ee48c62b0e1b1157e56efad8","kubernetes.io/config.mirror":"59d82eb7ee48c62b0e1b1157e56efad8","kubernetes.io/config.seen":"2023-02-24T23:06:20.399481307Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0224 15:07:06.307142   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:06.307148   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.307153   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.307159   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.309296   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.309308   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.309314   32699 round_trippers.go:580]     Audit-Id: 51a481e4-c5d0-4e1f-ba25-a55846d0a9c9
	I0224 15:07:06.309319   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.309324   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.309332   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.309337   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.309342   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.309410   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0224 15:07:06.309618   32699 pod_ready.go:92] pod "kube-apiserver-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:06.309624   32699 pod_ready.go:81] duration metric: took 4.966728ms waiting for pod "kube-apiserver-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.309629   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.309662   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-358000
	I0224 15:07:06.309667   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.309672   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.309678   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.311852   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.311861   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.311867   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.311874   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.311880   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.311885   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.311890   32699 round_trippers.go:580]     Audit-Id: 7e3fe730-4bb1-4bcf-bcce-b09fe46d181f
	I0224 15:07:06.311896   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.311967   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-358000","namespace":"kube-system","uid":"6d26b160-2631-4696-9633-0da5de0f9e6c","resourceVersion":"284","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"caa2ad2bdf9da4ce166e0faa9958c6ef","kubernetes.io/config.mirror":"caa2ad2bdf9da4ce166e0faa9958c6ef","kubernetes.io/config.seen":"2023-02-24T23:06:20.399482388Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0224 15:07:06.312234   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:06.312240   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.312247   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.312253   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.314475   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.314485   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.314490   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.314496   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.314503   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.314510   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.314516   32699 round_trippers.go:580]     Audit-Id: e20671b9-489b-4c27-ae74-d388c74639e5
	I0224 15:07:06.314521   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.314646   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0224 15:07:06.314818   32699 pod_ready.go:92] pod "kube-controller-manager-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:06.314824   32699 pod_ready.go:81] duration metric: took 5.189582ms waiting for pod "kube-controller-manager-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.314831   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-855bv" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.484613   32699 request.go:622] Waited for 169.72075ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:06.484730   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:06.484742   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.484754   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.484771   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.488656   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:06.488671   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.488679   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.488687   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.488700   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.488708   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.488714   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.488723   32699 round_trippers.go:580]     Audit-Id: f4b6075d-239b-4ecc-b833-c0355e38dcb2
	I0224 15:07:06.488795   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"460","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0224 15:07:06.685124   32699 request.go:622] Waited for 196.048239ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:06.685219   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:06.685231   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.685247   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.685258   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.689259   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:06.689277   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.689289   32699 round_trippers.go:580]     Audit-Id: 89f7580c-6127-46ab-9cbe-0a044089cc61
	I0224 15:07:06.689296   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.689304   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.689310   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.689317   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.689333   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.689563   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:07.191265   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:07.191284   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:07.191309   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:07.191319   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:07.195096   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:07.195115   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:07.195124   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:07.195131   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:07.195137   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:07 GMT
	I0224 15:07:07.195144   32699 round_trippers.go:580]     Audit-Id: 39f72e14-9b49-4e46-9c4b-f0c3c0235722
	I0224 15:07:07.195149   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:07.195154   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:07.195263   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"473","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 15:07:07.195563   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:07.195573   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:07.195580   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:07.195587   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:07.199379   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:07.199399   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:07.199409   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:07.199416   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:07.199423   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:07 GMT
	I0224 15:07:07.199432   32699 round_trippers.go:580]     Audit-Id: 2d1a02b9-6659-4684-a6c9-198dcfa57521
	I0224 15:07:07.199440   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:07.199448   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:07.199530   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:07.691350   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:07.691371   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:07.691383   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:07.691393   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:07.695277   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:07.695297   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:07.695315   32699 round_trippers.go:580]     Audit-Id: a9ab69ec-8dc2-448e-9baa-f8ffd84e4fc4
	I0224 15:07:07.695323   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:07.695333   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:07.695339   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:07.695343   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:07.695349   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:07 GMT
	I0224 15:07:07.695418   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"473","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 15:07:07.695672   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:07.695678   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:07.695684   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:07.695691   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:07.697436   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:07:07.697445   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:07.697450   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:07.697456   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:07.697460   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:07.697467   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:07 GMT
	I0224 15:07:07.697472   32699 round_trippers.go:580]     Audit-Id: d4362a29-7af3-440d-83a6-1e7309470ca4
	I0224 15:07:07.697477   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:07.697587   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:08.191408   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:08.191433   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:08.191445   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:08.191455   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:08.195566   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:07:08.195580   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:08.195586   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:08.195590   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:08.195595   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:08 GMT
	I0224 15:07:08.195600   32699 round_trippers.go:580]     Audit-Id: f98f2627-f131-4820-a004-c737559e1abd
	I0224 15:07:08.195605   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:08.195611   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:08.195678   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"473","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 15:07:08.195949   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:08.195956   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:08.195962   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:08.195966   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:08.198345   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:08.198360   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:08.198368   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:08.198376   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:08 GMT
	I0224 15:07:08.198381   32699 round_trippers.go:580]     Audit-Id: 4875b40c-1275-4b35-9b4c-380f915833d1
	I0224 15:07:08.198386   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:08.198391   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:08.198396   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:08.198449   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:08.691379   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:08.691399   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:08.691411   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:08.691426   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:08.695316   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:08.695326   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:08.695332   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:08 GMT
	I0224 15:07:08.695342   32699 round_trippers.go:580]     Audit-Id: d55384d3-a37c-494b-beb5-4b7038e4fbf1
	I0224 15:07:08.695347   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:08.695352   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:08.695357   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:08.695363   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:08.695433   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"473","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 15:07:08.695729   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:08.695737   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:08.695744   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:08.695751   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:08.698071   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:08.698084   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:08.698089   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:08.698094   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:08 GMT
	I0224 15:07:08.698100   32699 round_trippers.go:580]     Audit-Id: 65d9d778-b0ae-4437-9e8e-c9aea36028dc
	I0224 15:07:08.698105   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:08.698112   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:08.698117   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:08.698382   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:08.698544   32699 pod_ready.go:102] pod "kube-proxy-855bv" in "kube-system" namespace has status "Ready":"False"
	I0224 15:07:09.191417   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:09.191442   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:09.191498   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:09.191506   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:09.194842   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:09.194852   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:09.194858   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:09 GMT
	I0224 15:07:09.194863   32699 round_trippers.go:580]     Audit-Id: 7789971f-de57-4dd3-9c30-03ed9f1005f6
	I0224 15:07:09.194868   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:09.194872   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:09.194877   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:09.194882   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:09.194944   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"473","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 15:07:09.195201   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:09.195207   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:09.195213   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:09.195218   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:09.197143   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:07:09.197152   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:09.197157   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:09.197162   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:09.197168   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:09.197172   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:09.197177   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:09 GMT
	I0224 15:07:09.197204   32699 round_trippers.go:580]     Audit-Id: 5775bf86-a0ad-45cf-92b1-49a787831366
	I0224 15:07:09.197252   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:09.691328   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:09.691343   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:09.691350   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:09.691355   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:09.694491   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:09.694502   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:09.694508   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:09.694513   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:09.694517   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:09.694522   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:09 GMT
	I0224 15:07:09.694527   32699 round_trippers.go:580]     Audit-Id: 27ae4372-bbf9-413d-b01a-b6c955f1401b
	I0224 15:07:09.694532   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:09.694589   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"473","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 15:07:09.694873   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:09.694881   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:09.694887   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:09.694893   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:09.697022   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:09.697035   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:09.697041   32699 round_trippers.go:580]     Audit-Id: b0425882-56c2-44a4-b1e1-c1f0b9296433
	I0224 15:07:09.697046   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:09.697053   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:09.697060   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:09.697065   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:09.697070   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:09 GMT
	I0224 15:07:09.697130   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:10.191352   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:10.191367   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.191388   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.191397   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.193981   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:10.193993   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.193999   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.194004   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.194009   32699 round_trippers.go:580]     Audit-Id: 10d1fbcf-729e-4ddb-a250-aead773308b7
	I0224 15:07:10.194014   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.194023   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.194032   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.194306   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"484","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0224 15:07:10.194581   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:10.194587   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.194593   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.194599   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.196691   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:10.196699   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.196705   32699 round_trippers.go:580]     Audit-Id: ca64cdd8-d309-4dc3-8168-761455f445ed
	I0224 15:07:10.196712   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.196718   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.196722   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.196728   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.196732   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.196770   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:10.196919   32699 pod_ready.go:92] pod "kube-proxy-855bv" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:10.196927   32699 pod_ready.go:81] duration metric: took 3.881975044s waiting for pod "kube-proxy-855bv" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:10.196933   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rsf5q" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:10.196967   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-rsf5q
	I0224 15:07:10.196972   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.196977   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.196982   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.199321   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:10.199330   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.199336   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.199341   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.199347   32699 round_trippers.go:580]     Audit-Id: 44258eab-9e2d-46de-baf7-b13ecd40fca8
	I0224 15:07:10.199353   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.199358   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.199363   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.199413   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rsf5q","generateName":"kube-proxy-","namespace":"kube-system","uid":"34fab1a9-3416-47c1-9239-d7276b496a73","resourceVersion":"389","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0224 15:07:10.199643   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:10.199649   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.199656   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.199661   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.201483   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:07:10.201493   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.201498   32699 round_trippers.go:580]     Audit-Id: fe4c3f80-902a-4e1e-a7de-7c377863a649
	I0224 15:07:10.201504   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.201509   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.201516   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.201521   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.201528   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.201598   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0224 15:07:10.201784   32699 pod_ready.go:92] pod "kube-proxy-rsf5q" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:10.201789   32699 pod_ready.go:81] duration metric: took 4.852188ms waiting for pod "kube-proxy-rsf5q" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:10.201795   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:10.201821   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-358000
	I0224 15:07:10.201825   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.201831   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.201836   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.204079   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:10.204087   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.204092   32699 round_trippers.go:580]     Audit-Id: 1da7862d-69ad-435e-8344-14f7c22bbfdc
	I0224 15:07:10.204097   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.204103   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.204107   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.204112   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.204117   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.204169   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-358000","namespace":"kube-system","uid":"f1b648f4-a02a-4931-a791-578a6dba081f","resourceVersion":"281","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1df55cdda41cb4d1ff214c8ea21fdc45","kubernetes.io/config.mirror":"1df55cdda41cb4d1ff214c8ea21fdc45","kubernetes.io/config.seen":"2023-02-24T23:06:20.399486321Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0224 15:07:10.284513   32699 request.go:622] Waited for 80.136449ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:10.284560   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:10.284567   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.284574   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.284580   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.287491   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:10.287503   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.287509   32699 round_trippers.go:580]     Audit-Id: cb868c95-c0a8-4c55-a0ed-33e653c10f77
	I0224 15:07:10.287514   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.287519   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.287524   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.287528   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.287533   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.287666   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0224 15:07:10.287873   32699 pod_ready.go:92] pod "kube-scheduler-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:10.287879   32699 pod_ready.go:81] duration metric: took 86.077674ms waiting for pod "kube-scheduler-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:10.287886   32699 pod_ready.go:38] duration metric: took 4.000542645s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 15:07:10.287896   32699 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 15:07:10.287939   32699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:07:10.299727   32699 system_svc.go:56] duration metric: took 11.825115ms WaitForService to wait for kubelet.
	I0224 15:07:10.299742   32699 kubeadm.go:578] duration metric: took 4.149265905s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0224 15:07:10.299756   32699 node_conditions.go:102] verifying NodePressure condition ...
	I0224 15:07:10.485185   32699 request.go:622] Waited for 185.375168ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/nodes
	I0224 15:07:10.485251   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes
	I0224 15:07:10.485269   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.485285   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.485297   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.489590   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:07:10.489603   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.489609   32699 round_trippers.go:580]     Audit-Id: b455aba7-b8f2-4217-bd03-7bbe148d9a21
	I0224 15:07:10.489614   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.489619   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.489626   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.489632   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.489637   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.489738   32699 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"486"},"items":[{"metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10175 chars]
	I0224 15:07:10.490109   32699 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0224 15:07:10.490119   32699 node_conditions.go:123] node cpu capacity is 6
	I0224 15:07:10.490129   32699 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0224 15:07:10.490133   32699 node_conditions.go:123] node cpu capacity is 6
	I0224 15:07:10.490136   32699 node_conditions.go:105] duration metric: took 190.370203ms to run NodePressure ...
	I0224 15:07:10.490143   32699 start.go:228] waiting for startup goroutines ...
	I0224 15:07:10.490177   32699 start.go:242] writing updated cluster config ...
	I0224 15:07:10.490579   32699 ssh_runner.go:195] Run: rm -f paused
	I0224 15:07:10.529016   32699 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0224 15:07:10.550662   32699 out.go:177] * Done! kubectl is now configured to use "multinode-358000" cluster and "default" namespace by default
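The pod_ready lines above poll the API server roughly every 500ms, fetching the kube-proxy pod (and its node) until the pod reports Ready. Below is a minimal client-go sketch of that kind of loop; it is illustrative only and not part of the test output, and the kubeconfig path, namespace, and pod name are assumptions taken from this log.

// Illustrative sketch only: poll a pod's Ready condition, similar to the
// pod_ready wait recorded above. Pod name and namespace are taken from the log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config (minikube writes the "multinode-358000" context there).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-855bv", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
		if ready {
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval seen in the log
	}
}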
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 23:06:01 UTC, end at Fri 2023-02-24 23:07:20 UTC. --
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085577554Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085603026Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085613126Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085661876Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085686954Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085707186Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085753305Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085830710Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085857726Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.086481124Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.086529964Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.087031682Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.094537926Z" level=info msg="Loading containers: start."
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.173169539Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.206022332Z" level=info msg="Loading containers: done."
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.214236565Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.214341616Z" level=info msg="Daemon has completed initialization"
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.235611005Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 23:06:05 multinode-358000 systemd[1]: Started Docker Application Container Engine.
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.239437575Z" level=info msg="API listen on [::]:2376"
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.244656211Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 24 23:06:47 multinode-358000 dockerd[831]: time="2023-02-24T23:06:47.972912411Z" level=info msg="ignoring event" container=0739052e07226c5f180b03d44a1d09595b6474cfeac246dd1800195c74339f9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 23:06:48 multinode-358000 dockerd[831]: time="2023-02-24T23:06:48.086284897Z" level=info msg="ignoring event" container=a0cf71fcfe55092a281a02d546d3a236123195e4007be424f5e9784c12f57587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 23:06:48 multinode-358000 dockerd[831]: time="2023-02-24T23:06:48.486771907Z" level=info msg="ignoring event" container=91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 23:06:48 multinode-358000 dockerd[831]: time="2023-02-24T23:06:48.594340397Z" level=info msg="ignoring event" container=0a3266320057592f368ad3c52aba426612addb694e2e2af650e88909c7add2a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	4da301117c47a       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 seconds ago        Running             busybox                   0                   f0887f262b1d2
	c78cf81da9f02       5185b96f0becf                                                                                         32 seconds ago       Running             coredns                   1                   9fa4123cb816a
	45b6781b3c7fc       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              42 seconds ago       Running             kindnet-cni               0                   d729c67799665
	3777a98837330       6e38f40d628db                                                                                         45 seconds ago       Running             storage-provisioner       0                   fab46a66ddac6
	0739052e07226       5185b96f0becf                                                                                         46 seconds ago       Exited              coredns                   0                   a0cf71fcfe550
	40f2d805fba78       46a6bb3c77ce0                                                                                         46 seconds ago       Running             kube-proxy                0                   320666295bb52
	46937fcaeefed       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   411439c0f9588
	5c5051f9acbc0       e9c08e11b07f6                                                                                         About a minute ago   Running             kube-controller-manager   0                   06ec17b1a9fc2
	6dd5e22701b0a       655493523f607                                                                                         About a minute ago   Running             kube-scheduler            0                   a18e7ab9864c2
	b06dd1eae15b5       deb04688c4a35                                                                                         About a minute ago   Running             kube-apiserver            0                   94053a2f077b5
	
	* 
	* ==> coredns [0739052e0722] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 6913905827935292786.8584660205220874109. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 6913905827935292786.8584660205220874109. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	
	* 
	* ==> coredns [c78cf81da9f0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:35475 - 12806 "HINFO IN 6966946626238033810.3083996661101390669. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01412589s
	[INFO] 10.244.0.3:51576 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170312s
	[INFO] 10.244.0.3:49069 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.051019823s
	[INFO] 10.244.0.3:36866 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003480383s
	[INFO] 10.244.0.3:52548 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011924194s
	[INFO] 10.244.0.3:57435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172307s
	[INFO] 10.244.0.3:50752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00487532s
	[INFO] 10.244.0.3:37941 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00016835s
	[INFO] 10.244.0.3:54364 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001313s
	[INFO] 10.244.0.3:36795 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004073706s
	[INFO] 10.244.0.3:57809 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115928s
	[INFO] 10.244.0.3:51279 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009685s
	[INFO] 10.244.0.3:54221 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000201848s
	[INFO] 10.244.0.3:37061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132724s
	[INFO] 10.244.0.3:39696 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108898s
	[INFO] 10.244.0.3:50904 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088582s
	[INFO] 10.244.0.3:57173 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098117s
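The queries above (A/AAAA/PTR lookups for kubernetes.default and its search-path expansions) arrive from a pod at 10.244.0.3 and are answered by this CoreDNS instance. A small sketch of how such lookups could be reproduced from inside a pod follows; it is illustrative only, and the 10.96.0.10 cluster-DNS address is an assumption (the log only implies it through the 10.0.96.10.in-addr.arpa PTR records).

// Illustrative sketch only: send DNS queries straight to the assumed cluster DNS
// service instead of the host resolver, mimicking the queries logged above.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// 10.96.0.10 is the conventional kube-dns ClusterIP; an assumption here.
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default.svc.cluster.local ->", addrs) // typically the API service ClusterIP
}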
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-358000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-358000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08976559d74fb9c2654733dc21cb8f9d9ec24374
	                    minikube.k8s.io/name=multinode-358000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_24T15_06_21_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 23:06:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-358000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 23:07:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 23:06:51 +0000   Fri, 24 Feb 2023 23:06:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 23:06:51 +0000   Fri, 24 Feb 2023 23:06:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 23:06:51 +0000   Fri, 24 Feb 2023 23:06:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 23:06:51 +0000   Fri, 24 Feb 2023 23:06:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-358000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    892ae553-d6f4-4035-a8a5-8b0131f3b246
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-tnqbs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-787d4945fb-qfqth                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     48s
	  kube-system                 etcd-multinode-358000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-894f4                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      48s
	  kube-system                 kube-apiserver-multinode-358000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-multinode-358000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-rsf5q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-multinode-358000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 46s   kube-proxy       
	  Normal  Starting                 61s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  61s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  61s   kubelet          Node multinode-358000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s   kubelet          Node multinode-358000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s   kubelet          Node multinode-358000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s   node-controller  Node multinode-358000 event: Registered Node multinode-358000 in Controller
	
	
	Name:               multinode-358000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-358000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 23:07:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-358000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 23:07:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 23:07:05 +0000   Fri, 24 Feb 2023 23:07:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 23:07:05 +0000   Fri, 24 Feb 2023 23:07:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 23:07:05 +0000   Fri, 24 Feb 2023 23:07:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 23:07:05 +0000   Fri, 24 Feb 2023 23:07:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-358000-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    892ae553-d6f4-4035-a8a5-8b0131f3b246
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-5zqv7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kindnet-5qvwr               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16s
	  kube-system                 kube-proxy-855bv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  Starting                 17s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17s (x2 over 17s)  kubelet          Node multinode-358000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x2 over 17s)  kubelet          Node multinode-358000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x2 over 17s)  kubelet          Node multinode-358000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16s                kubelet          Node multinode-358000-m02 status is now: NodeReady
	  Normal  RegisteredNode           14s                node-controller  Node multinode-358000-m02 event: Registered Node multinode-358000-m02 in Controller
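The "node cpu capacity" and "node storage ephemeral capacity" lines earlier in this log come from reading each node's allocatable resources, which is the same information shown in the Capacity/Allocatable blocks above (cpu 6, ephemeral-storage 107016164Ki on both nodes). A minimal client-go sketch that lists nodes and prints those two figures follows; it is illustrative only, and the kubeconfig path is an assumption.

// Illustrative sketch only: list nodes and print allocatable cpu and
// ephemeral-storage, the values used for the NodePressure/capacity checks above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Allocatable[corev1.ResourceCPU]
		eph := n.Status.Allocatable[corev1.ResourceEphemeralStorage]
		// For this cluster both nodes report cpu=6 and ephemeral-storage=107016164Ki.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}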
	
	* 
	* ==> dmesg <==
	* [  +0.000064] FS-Cache: O-key=[8] '235dc60400000000'
	[  +0.000061] FS-Cache: N-cookie c=0000001c [p=00000014 fl=2 nc=0 na=1]
	[  +0.000043] FS-Cache: N-cookie d=00000000b6b06f96{9p.inode} n=000000001a032d23
	[  +0.000078] FS-Cache: N-key=[8] '235dc60400000000'
	[  +0.003038] FS-Cache: Duplicate cookie detected
	[  +0.000092] FS-Cache: O-cookie c=00000016 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000045] FS-Cache: O-cookie d=00000000b6b06f96{9p.inode} n=00000000c09690c2
	[  +0.000073] FS-Cache: O-key=[8] '235dc60400000000'
	[  +0.000050] FS-Cache: N-cookie c=0000001d [p=00000014 fl=2 nc=0 na=1]
	[  +0.000048] FS-Cache: N-cookie d=00000000b6b06f96{9p.inode} n=00000000fa0a547a
	[  +0.000052] FS-Cache: N-key=[8] '235dc60400000000'
	[  +3.553193] FS-Cache: Duplicate cookie detected
	[  +0.000091] FS-Cache: O-cookie c=00000017 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000052] FS-Cache: O-cookie d=00000000b6b06f96{9p.inode} n=0000000081c9d0cb
	[  +0.000059] FS-Cache: O-key=[8] '225dc60400000000'
	[  +0.000031] FS-Cache: N-cookie c=00000020 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000052] FS-Cache: N-cookie d=00000000b6b06f96{9p.inode} n=0000000011fb7533
	[  +0.000047] FS-Cache: N-key=[8] '225dc60400000000'
	[  +0.400852] FS-Cache: Duplicate cookie detected
	[  +0.000038] FS-Cache: O-cookie c=0000001a [p=00000014 fl=226 nc=0 na=1]
	[  +0.000057] FS-Cache: O-cookie d=00000000b6b06f96{9p.inode} n=00000000dd227ced
	[  +0.000061] FS-Cache: O-key=[8] '2b5dc60400000000'
	[  +0.000046] FS-Cache: N-cookie c=00000021 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000033] FS-Cache: N-cookie d=00000000b6b06f96{9p.inode} n=00000000cab77509
	[  +0.000067] FS-Cache: N-key=[8] '2b5dc60400000000'
	
	* 
	* ==> etcd [46937fcaeefe] <==
	* {"level":"info","ts":"2023-02-24T23:06:15.387Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-24T23:06:15.387Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-24T23:06:15.387Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-24T23:06:16.375Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-358000 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-24T23:06:16.375Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T23:06:16.375Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T23:06:16.375Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T23:06:16.376Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-24T23:06:16.376Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-02-24T23:06:16.377Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T23:06:16.377Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T23:06:16.377Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T23:06:16.377Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-24T23:06:16.377Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-24T23:06:20.138Z","caller":"traceutil/trace.go:171","msg":"trace[637080141] transaction","detail":"{read_only:false; response_revision:217; number_of_response:1; }","duration":"115.330473ms","start":"2023-02-24T23:06:20.023Z","end":"2023-02-24T23:06:20.138Z","steps":["trace[637080141] 'process raft request'  (duration: 115.291218ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-24T23:06:20.138Z","caller":"traceutil/trace.go:171","msg":"trace[972468534] transaction","detail":"{read_only:false; response_revision:216; number_of_response:1; }","duration":"134.148297ms","start":"2023-02-24T23:06:20.004Z","end":"2023-02-24T23:06:20.138Z","steps":["trace[972468534] 'process raft request'  (duration: 95.186825ms)","trace[972468534] 'compare'  (duration: 38.392219ms)"],"step_count":2}
	{"level":"info","ts":"2023-02-24T23:06:55.420Z","caller":"traceutil/trace.go:171","msg":"trace[1680627844] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"220.920869ms","start":"2023-02-24T23:06:55.198Z","end":"2023-02-24T23:06:55.420Z","steps":["trace[1680627844] 'process raft request'  (duration: 220.813103ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-24T23:06:57.687Z","caller":"traceutil/trace.go:171","msg":"trace[1695407084] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"262.174178ms","start":"2023-02-24T23:06:57.425Z","end":"2023-02-24T23:06:57.687Z","steps":["trace[1695407084] 'process raft request'  (duration: 262.082425ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:07:21 up  2:06,  0 users,  load average: 1.92, 1.19, 0.88
	Linux multinode-358000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [45b6781b3c7f] <==
	* I0224 23:06:38.554773       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0224 23:06:38.554887       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0224 23:06:38.555020       1 main.go:116] setting mtu 1500 for CNI 
	I0224 23:06:38.555035       1 main.go:146] kindnetd IP family: "ipv4"
	I0224 23:06:38.555051       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0224 23:06:39.154874       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 23:06:39.154932       1 main.go:227] handling current node
	I0224 23:06:49.265414       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 23:06:49.265453       1 main.go:227] handling current node
	I0224 23:06:59.278084       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 23:06:59.278159       1 main.go:227] handling current node
	I0224 23:07:09.282983       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 23:07:09.283024       1 main.go:227] handling current node
	I0224 23:07:09.283032       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0224 23:07:09.283036       1 main.go:250] Node multinode-358000-m02 has CIDR [10.244.1.0/24] 
	I0224 23:07:09.283135       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0224 23:07:19.294453       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 23:07:19.294493       1 main.go:227] handling current node
	I0224 23:07:19.294501       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0224 23:07:19.294505       1 main.go:250] Node multinode-358000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [b06dd1eae15b] <==
	* I0224 23:06:17.509104       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0224 23:06:17.513317       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0224 23:06:17.513410       1 shared_informer.go:280] Caches are synced for configmaps
	I0224 23:06:17.513469       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0224 23:06:17.513926       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0224 23:06:17.513972       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0224 23:06:17.514019       1 cache.go:39] Caches are synced for autoregister controller
	I0224 23:06:17.514064       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0224 23:06:17.514775       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0224 23:06:18.233048       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0224 23:06:18.418934       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0224 23:06:18.421549       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0224 23:06:18.421630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 23:06:18.855768       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 23:06:18.888482       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 23:06:18.967446       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0224 23:06:18.973225       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0224 23:06:18.973773       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 23:06:18.977909       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0224 23:06:19.474335       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 23:06:20.307048       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 23:06:20.316971       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0224 23:06:20.324072       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 23:06:32.782998       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0224 23:06:33.253037       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [5c5051f9acbc] <==
	* I0224 23:06:32.942422       1 shared_informer.go:280] Caches are synced for expand
	I0224 23:06:32.976915       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0224 23:06:32.983276       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 23:06:33.028180       1 shared_informer.go:280] Caches are synced for endpoint
	I0224 23:06:33.060401       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 23:06:33.100541       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0224 23:06:33.203306       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-tkkfd"
	I0224 23:06:33.209150       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-qfqth"
	I0224 23:06:33.266092       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rsf5q"
	I0224 23:06:33.271432       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-894f4"
	I0224 23:06:33.271454       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-tkkfd"
	I0224 23:06:33.371655       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 23:06:33.381422       1 event.go:294] "Event occurred" object="kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kube-system/kube-dns: endpoints \"kube-dns\" already exists"
	I0224 23:06:33.389034       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 23:06:33.389085       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	W0224 23:07:05.165358       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-358000-m02" does not exist
	I0224 23:07:05.169839       1 range_allocator.go:372] Set node multinode-358000-m02 PodCIDR to [10.244.1.0/24]
	I0224 23:07:05.172782       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5qvwr"
	I0224 23:07:05.176589       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-855bv"
	W0224 23:07:05.782830       1 topologycache.go:232] Can't get CPU or zone information for multinode-358000-m02 node
	W0224 23:07:07.783813       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-358000-m02. Assuming now as a timestamp.
	I0224 23:07:07.783966       1 event.go:294] "Event occurred" object="multinode-358000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-358000-m02 event: Registered Node multinode-358000-m02 in Controller"
	I0224 23:07:11.523322       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0224 23:07:11.570993       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-5zqv7"
	I0224 23:07:11.575925       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-tnqbs"
	
	* 
	* ==> kube-proxy [40f2d805fba7] <==
	* I0224 23:06:34.374638       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0224 23:06:34.374760       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0224 23:06:34.374802       1 server_others.go:535] "Using iptables proxy"
	I0224 23:06:34.397988       1 server_others.go:176] "Using iptables Proxier"
	I0224 23:06:34.398039       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0224 23:06:34.398047       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0224 23:06:34.398062       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0224 23:06:34.398083       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0224 23:06:34.398621       1 server.go:655] "Version info" version="v1.26.1"
	I0224 23:06:34.398676       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 23:06:34.399563       1 config.go:317] "Starting service config controller"
	I0224 23:06:34.399592       1 config.go:444] "Starting node config controller"
	I0224 23:06:34.399598       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0224 23:06:34.399594       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0224 23:06:34.399662       1 config.go:226] "Starting endpoint slice config controller"
	I0224 23:06:34.399666       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0224 23:06:34.500659       1 shared_informer.go:280] Caches are synced for node config
	I0224 23:06:34.500733       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0224 23:06:34.500754       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [6dd5e22701b0] <==
	* W0224 23:06:17.469793       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 23:06:17.469804       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0224 23:06:17.469847       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0224 23:06:17.469882       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0224 23:06:17.469956       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0224 23:06:17.469994       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0224 23:06:17.470030       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0224 23:06:17.470038       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0224 23:06:17.470152       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0224 23:06:17.470331       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0224 23:06:17.470310       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0224 23:06:17.470587       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0224 23:06:18.401705       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0224 23:06:18.401776       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0224 23:06:18.477493       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0224 23:06:18.477575       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0224 23:06:18.572201       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0224 23:06:18.572257       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0224 23:06:18.599464       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0224 23:06:18.599512       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0224 23:06:18.614617       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 23:06:18.614662       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0224 23:06:18.652944       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0224 23:06:18.653027       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0224 23:06:19.066401       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 23:06:01 UTC, end at Fri 2023-02-24 23:07:22 UTC. --
	Feb 24 23:06:34 multinode-358000 kubelet[2178]: I0224 23:06:34.789670    2178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d729c67799665b2f08432f392aacc1af82748696182361f23b53e44abdfff4f9"
	Feb 24 23:06:36 multinode-358000 kubelet[2178]: I0224 23:06:36.017030    2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rsf5q" podStartSLOduration=3.017002562 pod.CreationTimestamp="2023-02-24 23:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 23:06:36.016827563 +0000 UTC m=+15.727492732" watchObservedRunningTime="2023-02-24 23:06:36.017002562 +0000 UTC m=+15.727667730"
	Feb 24 23:06:36 multinode-358000 kubelet[2178]: I0224 23:06:36.417413    2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-tkkfd" podStartSLOduration=3.41738528 pod.CreationTimestamp="2023-02-24 23:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 23:06:36.417181529 +0000 UTC m=+16.127846698" watchObservedRunningTime="2023-02-24 23:06:36.41738528 +0000 UTC m=+16.128050449"
	Feb 24 23:06:36 multinode-358000 kubelet[2178]: I0224 23:06:36.817487    2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-qfqth" podStartSLOduration=3.8174574850000003 pod.CreationTimestamp="2023-02-24 23:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 23:06:36.817257311 +0000 UTC m=+16.527922481" watchObservedRunningTime="2023-02-24 23:06:36.817457485 +0000 UTC m=+16.528122654"
	Feb 24 23:06:37 multinode-358000 kubelet[2178]: I0224 23:06:37.216766    2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.21673015 pod.CreationTimestamp="2023-02-24 23:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 23:06:37.216544795 +0000 UTC m=+16.927209960" watchObservedRunningTime="2023-02-24 23:06:37.21673015 +0000 UTC m=+16.927395310"
	Feb 24 23:06:41 multinode-358000 kubelet[2178]: I0224 23:06:41.055369    2178 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 24 23:06:41 multinode-358000 kubelet[2178]: I0224 23:06:41.056061    2178 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.706381    2178 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c9979a-3811-4b07-aa6d-4d766209627d-config-volume\") pod \"75c9979a-3811-4b07-aa6d-4d766209627d\" (UID: \"75c9979a-3811-4b07-aa6d-4d766209627d\") "
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.706498    2178 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbbs9\" (UniqueName: \"kubernetes.io/projected/75c9979a-3811-4b07-aa6d-4d766209627d-kube-api-access-nbbs9\") pod \"75c9979a-3811-4b07-aa6d-4d766209627d\" (UID: \"75c9979a-3811-4b07-aa6d-4d766209627d\") "
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: W0224 23:06:48.707141    2178 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/75c9979a-3811-4b07-aa6d-4d766209627d/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.707442    2178 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75c9979a-3811-4b07-aa6d-4d766209627d-config-volume" (OuterVolumeSpecName: "config-volume") pod "75c9979a-3811-4b07-aa6d-4d766209627d" (UID: "75c9979a-3811-4b07-aa6d-4d766209627d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.709579    2178 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c9979a-3811-4b07-aa6d-4d766209627d-kube-api-access-nbbs9" (OuterVolumeSpecName: "kube-api-access-nbbs9") pod "75c9979a-3811-4b07-aa6d-4d766209627d" (UID: "75c9979a-3811-4b07-aa6d-4d766209627d"). InnerVolumeSpecName "kube-api-access-nbbs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.807004    2178 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-nbbs9\" (UniqueName: \"kubernetes.io/projected/75c9979a-3811-4b07-aa6d-4d766209627d-kube-api-access-nbbs9\") on node \"multinode-358000\" DevicePath \"\""
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.807109    2178 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c9979a-3811-4b07-aa6d-4d766209627d-config-volume\") on node \"multinode-358000\" DevicePath \"\""
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.973537    2178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0cf71fcfe55092a281a02d546d3a236123195e4007be424f5e9784c12f57587"
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.979614    2178 scope.go:115] "RemoveContainer" containerID="91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3"
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.995214    2178 scope.go:115] "RemoveContainer" containerID="91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3"
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: E0224 23:06:48.996270    2178 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3" containerID="91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3"
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.996329    2178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3} err="failed to get container status \"91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3\": rpc error: code = Unknown desc = Error: No such container: 91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3"
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.997574    2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-894f4" podStartSLOduration=-9.223372020857225e+09 pod.CreationTimestamp="2023-02-24 23:06:33 +0000 UTC" firstStartedPulling="2023-02-24 23:06:34.678974996 +0000 UTC m=+14.389640156" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 23:06:38.874057285 +0000 UTC m=+18.584722454" watchObservedRunningTime="2023-02-24 23:06:48.997549812 +0000 UTC m=+28.708214981"
	Feb 24 23:06:50 multinode-358000 kubelet[2178]: I0224 23:06:50.493061    2178 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=75c9979a-3811-4b07-aa6d-4d766209627d path="/var/lib/kubelet/pods/75c9979a-3811-4b07-aa6d-4d766209627d/volumes"
	Feb 24 23:07:11 multinode-358000 kubelet[2178]: I0224 23:07:11.581434    2178 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 23:07:11 multinode-358000 kubelet[2178]: E0224 23:07:11.581514    2178 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75c9979a-3811-4b07-aa6d-4d766209627d" containerName="coredns"
	Feb 24 23:07:11 multinode-358000 kubelet[2178]: I0224 23:07:11.581538    2178 memory_manager.go:346] "RemoveStaleState removing state" podUID="75c9979a-3811-4b07-aa6d-4d766209627d" containerName="coredns"
	Feb 24 23:07:11 multinode-358000 kubelet[2178]: I0224 23:07:11.672771    2178 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b55l6\" (UniqueName: \"kubernetes.io/projected/e12ec6d0-ab35-4586-85c7-f1e53343d029-kube-api-access-b55l6\") pod \"busybox-6b86dd6d48-tnqbs\" (UID: \"e12ec6d0-ab35-4586-85c7-f1e53343d029\") " pod="default/busybox-6b86dd6d48-tnqbs"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-358000 -n multinode-358000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-358000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (11.69s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-358000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-358000 -- exec busybox-6b86dd6d48-5zqv7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: minikube host ip is nil: 
** stderr ** 
	nslookup: can't resolve 'host.minikube.internal'

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-358000
helpers_test.go:235: (dbg) docker inspect multinode-358000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a49b2d313bc19edfab0cb3cbe5fc2adbcdcb89e3ecca6b42f4f3caa0dee30a8",
	        "Created": "2023-02-24T23:06:00.811874367Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 475219,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T23:06:01.100784819Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/7a49b2d313bc19edfab0cb3cbe5fc2adbcdcb89e3ecca6b42f4f3caa0dee30a8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a49b2d313bc19edfab0cb3cbe5fc2adbcdcb89e3ecca6b42f4f3caa0dee30a8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a49b2d313bc19edfab0cb3cbe5fc2adbcdcb89e3ecca6b42f4f3caa0dee30a8/hosts",
	        "LogPath": "/var/lib/docker/containers/7a49b2d313bc19edfab0cb3cbe5fc2adbcdcb89e3ecca6b42f4f3caa0dee30a8/7a49b2d313bc19edfab0cb3cbe5fc2adbcdcb89e3ecca6b42f4f3caa0dee30a8-json.log",
	        "Name": "/multinode-358000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-358000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-358000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/01af4a4fa106522bbb3042ade1c0f83ad8608557aa6739ee9122a49a0e2e002f-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/01af4a4fa106522bbb3042ade1c0f83ad8608557aa6739ee9122a49a0e2e002f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/01af4a4fa106522bbb3042ade1c0f83ad8608557aa6739ee9122a49a0e2e002f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/01af4a4fa106522bbb3042ade1c0f83ad8608557aa6739ee9122a49a0e2e002f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-358000",
	                "Source": "/var/lib/docker/volumes/multinode-358000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-358000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-358000",
	                "name.minikube.sigs.k8s.io": "multinode-358000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b870af0eff0f496d738300826c27df29ab50762fd500f6d77e77cfc70c35ff37",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58094"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58095"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58096"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58097"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58093"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b870af0eff0f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-358000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a49b2d313bc",
	                        "multinode-358000"
	                    ],
	                    "NetworkID": "0c9844f869c1c112c7c27c3cf5d33f464f5933c29bc5fe8a123a6550e7d34275",
	                    "EndpointID": "3676bd97086239e08187252245de2e154436a8a683baf61dbaf7da73343aabaa",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-358000 -n multinode-358000
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-358000 logs -n 25: (2.546244368s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p first-289000                                   | first-289000         | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	| start   | -p mount-start-1-857000                           | mount-start-1-857000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-1-857000 ssh -- ls                    | mount-start-1-857000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| start   | -p mount-start-2-870000                           | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-2-870000 ssh -- ls                    | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-857000                           | mount-start-1-857000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-870000 ssh -- ls                    | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-870000                           | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	| start   | -p mount-start-2-870000                           | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	| ssh     | mount-start-2-870000 ssh -- ls                    | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-870000                           | mount-start-2-870000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	| delete  | -p mount-start-1-857000                           | mount-start-1-857000 | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:05 PST |
	| start   | -p multinode-358000                               | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:05 PST | 24 Feb 23 15:07 PST |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- apply -f                   | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- rollout                    | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- get pods -o                | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- get pods -o                | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST |                     |
	|         | busybox-6b86dd6d48-5zqv7 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | busybox-6b86dd6d48-tnqbs --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST |                     |
	|         | busybox-6b86dd6d48-5zqv7 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | busybox-6b86dd6d48-tnqbs --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST |                     |
	|         | busybox-6b86dd6d48-5zqv7 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | busybox-6b86dd6d48-tnqbs -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- get pods -o                | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-358000 -- exec                       | multinode-358000     | jenkins | v1.29.0 | 24 Feb 23 15:07 PST | 24 Feb 23 15:07 PST |
	|         | busybox-6b86dd6d48-5zqv7                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
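For reference, every kubectl row in the table above reduces to the same check: run nslookup inside one of the busybox test pods through the minikube kubectl pass-through. A minimal Go sketch of that check follows; the profile and pod names are copied from the table and only exist in this test run, and the helper itself is illustrative rather than minikube source.

package main

import (
	"fmt"
	"os/exec"
)

// lookupInPod mirrors the table rows above: run nslookup inside a busybox
// test pod via `minikube kubectl -p <profile> -- exec <pod> -- nslookup <host>`.
func lookupInPod(minikube, profile, pod, host string) (string, error) {
	out, err := exec.Command(minikube, "kubectl", "-p", profile, "--",
		"exec", pod, "--", "nslookup", host).CombinedOutput()
	return string(out), err
}

func main() {
	// Names taken from this run; swap in your own profile and pod.
	out, err := lookupInPod("out/minikube-darwin-amd64", "multinode-358000",
		"busybox-6b86dd6d48-tnqbs", "kubernetes.default.svc.cluster.local")
	fmt.Print(out)
	if err != nil {
		fmt.Println("lookup failed:", err)
	}
}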
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 15:05:52
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 15:05:52.700078   32699 out.go:296] Setting OutFile to fd 1 ...
	I0224 15:05:52.700243   32699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:05:52.700248   32699 out.go:309] Setting ErrFile to fd 2...
	I0224 15:05:52.700251   32699 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:05:52.700359   32699 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 15:05:52.701724   32699 out.go:303] Setting JSON to false
	I0224 15:05:52.719942   32699 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":7526,"bootTime":1677272426,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 15:05:52.720068   32699 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 15:05:52.742122   32699 out.go:177] * [multinode-358000] minikube v1.29.0 on Darwin 13.2.1
	I0224 15:05:52.785190   32699 notify.go:220] Checking for updates...
	I0224 15:05:52.807183   32699 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 15:05:52.829307   32699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:05:52.851060   32699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 15:05:52.872078   32699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 15:05:52.893322   32699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	I0224 15:05:52.915118   32699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 15:05:52.936298   32699 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 15:05:52.998124   32699 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 15:05:52.998262   32699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:05:53.140490   32699 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-24 23:05:53.047731623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:05:53.162589   32699 out.go:177] * Using the docker driver based on user configuration
	I0224 15:05:53.184065   32699 start.go:296] selected driver: docker
	I0224 15:05:53.184098   32699 start.go:857] validating driver "docker" against <nil>
	I0224 15:05:53.184117   32699 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 15:05:53.188041   32699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:05:53.329111   32699 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-24 23:05:53.236765216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:05:53.329243   32699 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0224 15:05:53.329418   32699 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 15:05:53.351348   32699 out.go:177] * Using Docker Desktop driver with root privileges
	I0224 15:05:53.372891   32699 cni.go:84] Creating CNI manager for ""
	I0224 15:05:53.372974   32699 cni.go:136] 0 nodes found, recommending kindnet
	I0224 15:05:53.372990   32699 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0224 15:05:53.373007   32699 start_flags.go:319] config:
	{Name:multinode-358000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:05:53.415868   32699 out.go:177] * Starting control plane node multinode-358000 in cluster multinode-358000
	I0224 15:05:53.437132   32699 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 15:05:53.458831   32699 out.go:177] * Pulling base image ...
	I0224 15:05:53.501132   32699 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:05:53.501193   32699 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 15:05:53.501240   32699 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 15:05:53.501262   32699 cache.go:57] Caching tarball of preloaded images
	I0224 15:05:53.501489   32699 preload.go:174] Found /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 15:05:53.501508   32699 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 15:05:53.503803   32699 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/config.json ...
	I0224 15:05:53.503859   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/config.json: {Name:mka69897b551e7928bc6b44fce9cad263e070669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:05:53.577571   32699 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 15:05:53.577615   32699 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 15:05:53.577682   32699 cache.go:193] Successfully downloaded all kic artifacts
	I0224 15:05:53.577742   32699 start.go:364] acquiring machines lock for multinode-358000: {Name:mk212d26ea22c7f1fb6b8f9cd0233a6686bc192d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 15:05:53.577973   32699 start.go:368] acquired machines lock for "multinode-358000" in 212.735µs
	I0224 15:05:53.578014   32699 start.go:93] Provisioning new machine with config: &{Name:multinode-358000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 15:05:53.578135   32699 start.go:125] createHost starting for "" (driver="docker")
	I0224 15:05:53.621823   32699 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0224 15:05:53.622290   32699 start.go:159] libmachine.API.Create for "multinode-358000" (driver="docker")
	I0224 15:05:53.622354   32699 client.go:168] LocalClient.Create starting
	I0224 15:05:53.622646   32699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem
	I0224 15:05:53.622777   32699 main.go:141] libmachine: Decoding PEM data...
	I0224 15:05:53.622828   32699 main.go:141] libmachine: Parsing certificate...
	I0224 15:05:53.622978   32699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem
	I0224 15:05:53.623064   32699 main.go:141] libmachine: Decoding PEM data...
	I0224 15:05:53.623082   32699 main.go:141] libmachine: Parsing certificate...
	I0224 15:05:53.624014   32699 cli_runner.go:164] Run: docker network inspect multinode-358000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0224 15:05:53.684011   32699 cli_runner.go:211] docker network inspect multinode-358000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0224 15:05:53.684109   32699 network_create.go:281] running [docker network inspect multinode-358000] to gather additional debugging logs...
	I0224 15:05:53.684127   32699 cli_runner.go:164] Run: docker network inspect multinode-358000
	W0224 15:05:53.739375   32699 cli_runner.go:211] docker network inspect multinode-358000 returned with exit code 1
	I0224 15:05:53.739405   32699 network_create.go:284] error running [docker network inspect multinode-358000]: docker network inspect multinode-358000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-358000
	I0224 15:05:53.739418   32699 network_create.go:286] output of [docker network inspect multinode-358000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-358000
	
	** /stderr **
	I0224 15:05:53.739519   32699 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 15:05:53.799037   32699 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0224 15:05:53.799354   32699 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00129b990}
	I0224 15:05:53.799367   32699 network_create.go:123] attempt to create docker network multinode-358000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0224 15:05:53.799437   32699 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-358000 multinode-358000
	I0224 15:05:53.890418   32699 network_create.go:107] docker network multinode-358000 192.168.58.0/24 created
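The network setup logged above picks the first free private /24 (192.168.58.0/24 in this run) and creates a labeled bridge network for the cluster. A Go sketch of the same `docker network create` invocation, shelling out the way cli_runner.go does; the subnet, gateway and profile name are the ones from this log, and the wrapper function is illustrative only.

package main

import (
	"fmt"
	"os/exec"
)

// createClusterNetwork mirrors the logged `docker network create`: a bridge
// network with a fixed subnet/gateway, MTU option and minikube labels.
func createClusterNetwork(name, subnet, gateway string, mtu int) error {
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=" + subnet, "--gateway=" + gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker %v failed: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	err := createClusterNetwork("multinode-358000", "192.168.58.0/24", "192.168.58.1", 1500)
	if err != nil {
		fmt.Println(err)
	}
}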
	I0224 15:05:53.890457   32699 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-358000" container
	I0224 15:05:53.890580   32699 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 15:05:53.946803   32699 cli_runner.go:164] Run: docker volume create multinode-358000 --label name.minikube.sigs.k8s.io=multinode-358000 --label created_by.minikube.sigs.k8s.io=true
	I0224 15:05:54.004323   32699 oci.go:103] Successfully created a docker volume multinode-358000
	I0224 15:05:54.004453   32699 cli_runner.go:164] Run: docker run --rm --name multinode-358000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-358000 --entrypoint /usr/bin/test -v multinode-358000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0224 15:05:54.456252   32699 oci.go:107] Successfully prepared a docker volume multinode-358000
	I0224 15:05:54.456288   32699 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:05:54.456302   32699 kic.go:190] Starting extracting preloaded images to volume ...
	I0224 15:05:54.456407   32699 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-358000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 15:06:00.615283   32699 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-358000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.158620865s)
	I0224 15:06:00.615308   32699 kic.go:199] duration metric: took 6.158821 seconds to extract preloaded images to volume
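The 6.1 s spent above is the preload step: the lz4 image tarball and the cluster's named volume are both mounted into a throwaway kicbase container, and tar unpacks the images into what will become /var inside the node. A hedged sketch of that step in Go; the paths are the ones from this log, the image reference is shown without its @sha256 digest for brevity, and the helper is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the logged `docker run --rm --entrypoint /usr/bin/tar ...`:
// the preloaded-images tarball is bind-mounted read-only and extracted into the
// named volume that backs /var in the node container.
func extractPreload(image, tarball, volume string) error {
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("preload extract failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Values from this run; adjust for your own cache layout.
	err := extractPreload(
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768",
		"/Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4",
		"multinode-358000",
	)
	if err != nil {
		fmt.Println(err)
	}
}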
	I0224 15:06:00.615424   32699 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 15:06:00.757047   32699 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-358000 --name multinode-358000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-358000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-358000 --network multinode-358000 --ip 192.168.58.2 --volume multinode-358000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0224 15:06:01.109529   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Running}}
	I0224 15:06:01.172749   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:06:01.240155   32699 cli_runner.go:164] Run: docker exec multinode-358000 stat /var/lib/dpkg/alternatives/iptables
	I0224 15:06:01.352021   32699 oci.go:144] the created container "multinode-358000" has a running status.
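The `docker run` a few lines up is where the node itself comes from: a privileged kicbase container with tmpfs /tmp and /run, the cluster volume mounted at /var, a static IP on the cluster network, and the API-server, SSH and docker ports published to loopback. A trimmed Go sketch of that invocation; the flags are copied from the logged command, but the memory/CPU limits and the minikube labels are omitted for brevity, so treat it as illustrative rather than a full reproduction.

package main

import (
	"fmt"
	"os/exec"
)

// runNodeContainer reproduces the core of the logged `docker run` that creates
// the multinode-358000 control-plane container.
func runNodeContainer(name, network, ip, image string) error {
	args := []string{
		"run", "-d", "-t", "--privileged",
		"--security-opt", "seccomp=unconfined",
		"--security-opt", "apparmor=unconfined",
		"--tmpfs", "/tmp", "--tmpfs", "/run",
		"-v", "/lib/modules:/lib/modules:ro",
		"--hostname", name, "--name", name,
		"--network", network, "--ip", ip,
		"--volume", name + ":/var",
		"-e", "container=docker",
		"--expose", "8443",
		"--publish=127.0.0.1::8443", "--publish=127.0.0.1::22",
		"--publish=127.0.0.1::2376", "--publish=127.0.0.1::5000",
		"--publish=127.0.0.1::32443",
		image,
	}
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker run failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := runNodeContainer("multinode-358000", "multinode-358000",
		"192.168.58.2", "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768")
	if err != nil {
		fmt.Println(err)
	}
}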
	I0224 15:06:01.352058   32699 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa...
	I0224 15:06:01.599598   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0224 15:06:01.599677   32699 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 15:06:01.704310   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:06:01.760772   32699 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 15:06:01.760792   32699 kic_runner.go:114] Args: [docker exec --privileged multinode-358000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 15:06:01.861668   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:06:01.918839   32699 machine.go:88] provisioning docker machine ...
	I0224 15:06:01.918883   32699 ubuntu.go:169] provisioning hostname "multinode-358000"
	I0224 15:06:01.918988   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:01.976468   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:01.976850   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58094 <nil> <nil>}
	I0224 15:06:01.976864   32699 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-358000 && echo "multinode-358000" | sudo tee /etc/hostname
	I0224 15:06:02.120887   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-358000
	
	I0224 15:06:02.120963   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:02.178550   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:02.178931   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58094 <nil> <nil>}
	I0224 15:06:02.178944   32699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-358000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-358000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-358000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 15:06:02.314857   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 15:06:02.314883   32699 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
	I0224 15:06:02.314902   32699 ubuntu.go:177] setting up certificates
	I0224 15:06:02.314909   32699 provision.go:83] configureAuth start
	I0224 15:06:02.315000   32699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000
	I0224 15:06:02.371002   32699 provision.go:138] copyHostCerts
	I0224 15:06:02.371047   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:06:02.371106   32699 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
	I0224 15:06:02.371114   32699 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:06:02.371235   32699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
	I0224 15:06:02.371403   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:06:02.371434   32699 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
	I0224 15:06:02.371439   32699 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:06:02.371505   32699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
	I0224 15:06:02.371612   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:06:02.371651   32699 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
	I0224 15:06:02.371655   32699 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:06:02.371717   32699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
	I0224 15:06:02.371826   32699 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.multinode-358000 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-358000]
	I0224 15:06:02.441506   32699 provision.go:172] copyRemoteCerts
	I0224 15:06:02.441562   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 15:06:02.441619   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:02.498703   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:02.594945   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0224 15:06:02.595046   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 15:06:02.612524   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0224 15:06:02.612584   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0224 15:06:02.629650   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0224 15:06:02.629727   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 15:06:02.647276   32699 provision.go:86] duration metric: configureAuth took 332.338688ms
	I0224 15:06:02.647290   32699 ubuntu.go:193] setting minikube options for container-runtime
	I0224 15:06:02.647451   32699 config.go:182] Loaded profile config "multinode-358000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:06:02.647516   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:02.734568   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:02.735051   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58094 <nil> <nil>}
	I0224 15:06:02.735072   32699 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 15:06:02.871688   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 15:06:02.871707   32699 ubuntu.go:71] root file system type: overlay
	I0224 15:06:02.871790   32699 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 15:06:02.871877   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:02.929413   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:02.929798   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58094 <nil> <nil>}
	I0224 15:06:02.929847   32699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 15:06:03.072077   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 15:06:03.072164   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:03.128700   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:03.129050   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58094 <nil> <nil>}
	I0224 15:06:03.129063   32699 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 15:06:03.745599   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:06:03.070159446 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 15:06:03.745620   32699 machine.go:91] provisioned docker machine in 1.826705767s
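The diff above is also why the restart happens: the freshly rendered docker.service differs from the stock unit, so the SSH command logged earlier swaps the new file in and restarts the daemon; when the rendered unit is identical, diff exits 0 and nothing changes. A small Go sketch of driving that same one-liner; minikube sends it over SSH through its ssh_runner, whereas this sketch runs it against a local shell purely for illustration.

package main

import (
	"fmt"
	"os/exec"
)

// applyUnitIfChanged reproduces the idiom from the log: leave docker.service
// alone when the rendered unit matches, otherwise move the new file into place,
// reload systemd, re-enable docker and restart it.
func applyUnitIfChanged() error {
	cmd := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
		`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
		`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := applyUnitIfChanged(); err != nil {
		fmt.Println("unit update failed:", err)
	}
}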
	I0224 15:06:03.745626   32699 client.go:171] LocalClient.Create took 10.12295837s
	I0224 15:06:03.745643   32699 start.go:167] duration metric: libmachine.API.Create for "multinode-358000" took 10.123054882s
	I0224 15:06:03.745652   32699 start.go:300] post-start starting for "multinode-358000" (driver="docker")
	I0224 15:06:03.745657   32699 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 15:06:03.745745   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 15:06:03.745797   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:03.805311   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:03.902292   32699 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 15:06:03.905822   32699 command_runner.go:130] > NAME="Ubuntu"
	I0224 15:06:03.905837   32699 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0224 15:06:03.905842   32699 command_runner.go:130] > ID=ubuntu
	I0224 15:06:03.905849   32699 command_runner.go:130] > ID_LIKE=debian
	I0224 15:06:03.905855   32699 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0224 15:06:03.905859   32699 command_runner.go:130] > VERSION_ID="20.04"
	I0224 15:06:03.905865   32699 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0224 15:06:03.905873   32699 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0224 15:06:03.905878   32699 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0224 15:06:03.905887   32699 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0224 15:06:03.905891   32699 command_runner.go:130] > VERSION_CODENAME=focal
	I0224 15:06:03.905895   32699 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0224 15:06:03.905943   32699 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 15:06:03.905957   32699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 15:06:03.905965   32699 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 15:06:03.905969   32699 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 15:06:03.905980   32699 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
	I0224 15:06:03.906085   32699 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
	I0224 15:06:03.906259   32699 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
	I0224 15:06:03.906265   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> /etc/ssl/certs/268712.pem
	I0224 15:06:03.906453   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 15:06:03.913757   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:06:03.930925   32699 start.go:303] post-start completed in 185.257841ms
	I0224 15:06:03.931463   32699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000
	I0224 15:06:03.987950   32699 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/config.json ...
	I0224 15:06:03.988379   32699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 15:06:03.988438   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:04.044302   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:04.136580   32699 command_runner.go:130] > 5%!
	(MISSING)I0224 15:06:04.136658   32699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 15:06:04.141104   32699 command_runner.go:130] > 93G
	I0224 15:06:04.141412   32699 start.go:128] duration metric: createHost completed in 10.562952813s
	I0224 15:06:04.141426   32699 start.go:83] releasing machines lock for "multinode-358000", held for 10.563126002s
	I0224 15:06:04.141536   32699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000
	I0224 15:06:04.197175   32699 ssh_runner.go:195] Run: cat /version.json
	I0224 15:06:04.197187   32699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 15:06:04.197240   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:04.197262   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:04.257562   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:04.257707   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:04.400238   32699 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0224 15:06:04.401751   32699 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0224 15:06:04.401883   32699 ssh_runner.go:195] Run: systemctl --version
	I0224 15:06:04.406448   32699 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0224 15:06:04.406470   32699 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0224 15:06:04.406821   32699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 15:06:04.411614   32699 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0224 15:06:04.411623   32699 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0224 15:06:04.411628   32699 command_runner.go:130] > Device: a6h/166d	Inode: 2885207     Links: 1
	I0224 15:06:04.411636   32699 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 15:06:04.411646   32699 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0224 15:06:04.411650   32699 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0224 15:06:04.411654   32699 command_runner.go:130] > Change: 2023-02-24 22:41:56.862825099 +0000
	I0224 15:06:04.411658   32699 command_runner.go:130] >  Birth: -
	I0224 15:06:04.411997   32699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 15:06:04.431940   32699 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 15:06:04.432008   32699 ssh_runner.go:195] Run: which cri-dockerd
	I0224 15:06:04.435777   32699 command_runner.go:130] > /usr/bin/cri-dockerd
	I0224 15:06:04.435992   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 15:06:04.443243   32699 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 15:06:04.456172   32699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 15:06:04.470818   32699 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0224 15:06:04.470854   32699 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0224 15:06:04.470865   32699 start.go:485] detecting cgroup driver to use...
	I0224 15:06:04.470875   32699 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:06:04.470951   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:06:04.483120   32699 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0224 15:06:04.483136   32699 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
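The crictl step above only points crictl at the containerd socket. A minimal Go sketch of writing the same two-line config; inside the node this is tee'd to /etc/crictl.yaml via sudo, while the sketch writes to a local path so it stays harmless.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Same two endpoints the log shows being written to /etc/crictl.yaml.
	conf := "runtime-endpoint: unix:///run/containerd/containerd.sock\n" +
		"image-endpoint: unix:///run/containerd/containerd.sock\n"
	if err := os.WriteFile("crictl.yaml", []byte(conf), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}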
	I0224 15:06:04.483918   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 15:06:04.492451   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 15:06:04.500930   32699 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 15:06:04.500993   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 15:06:04.509362   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:06:04.517801   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 15:06:04.526164   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:06:04.534551   32699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 15:06:04.542391   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 15:06:04.550807   32699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 15:06:04.557252   32699 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0224 15:06:04.557968   32699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 15:06:04.565084   32699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:06:04.629317   32699 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 15:06:04.701330   32699 start.go:485] detecting cgroup driver to use...
	I0224 15:06:04.701349   32699 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:06:04.701410   32699 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 15:06:04.710823   32699 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0224 15:06:04.710953   32699 command_runner.go:130] > [Unit]
	I0224 15:06:04.710964   32699 command_runner.go:130] > Description=Docker Application Container Engine
	I0224 15:06:04.710971   32699 command_runner.go:130] > Documentation=https://docs.docker.com
	I0224 15:06:04.710977   32699 command_runner.go:130] > BindsTo=containerd.service
	I0224 15:06:04.710982   32699 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0224 15:06:04.710986   32699 command_runner.go:130] > Wants=network-online.target
	I0224 15:06:04.710992   32699 command_runner.go:130] > Requires=docker.socket
	I0224 15:06:04.710995   32699 command_runner.go:130] > StartLimitBurst=3
	I0224 15:06:04.711000   32699 command_runner.go:130] > StartLimitIntervalSec=60
	I0224 15:06:04.711009   32699 command_runner.go:130] > [Service]
	I0224 15:06:04.711013   32699 command_runner.go:130] > Type=notify
	I0224 15:06:04.711016   32699 command_runner.go:130] > Restart=on-failure
	I0224 15:06:04.711022   32699 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0224 15:06:04.711047   32699 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0224 15:06:04.711053   32699 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0224 15:06:04.711059   32699 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0224 15:06:04.711065   32699 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0224 15:06:04.711070   32699 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0224 15:06:04.711076   32699 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0224 15:06:04.711089   32699 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0224 15:06:04.711095   32699 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0224 15:06:04.711102   32699 command_runner.go:130] > ExecStart=
	I0224 15:06:04.711115   32699 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0224 15:06:04.711120   32699 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0224 15:06:04.711125   32699 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0224 15:06:04.711132   32699 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0224 15:06:04.711136   32699 command_runner.go:130] > LimitNOFILE=infinity
	I0224 15:06:04.711139   32699 command_runner.go:130] > LimitNPROC=infinity
	I0224 15:06:04.711143   32699 command_runner.go:130] > LimitCORE=infinity
	I0224 15:06:04.711148   32699 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0224 15:06:04.711152   32699 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0224 15:06:04.711156   32699 command_runner.go:130] > TasksMax=infinity
	I0224 15:06:04.711159   32699 command_runner.go:130] > TimeoutStartSec=0
	I0224 15:06:04.711164   32699 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0224 15:06:04.711169   32699 command_runner.go:130] > Delegate=yes
	I0224 15:06:04.711173   32699 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0224 15:06:04.711177   32699 command_runner.go:130] > KillMode=process
	I0224 15:06:04.711185   32699 command_runner.go:130] > [Install]
	I0224 15:06:04.711190   32699 command_runner.go:130] > WantedBy=multi-user.target
	I0224 15:06:04.711753   32699 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 15:06:04.711819   32699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 15:06:04.722696   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:06:04.736120   32699 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 15:06:04.736140   32699 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
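The two lines echoed above are the entire /etc/crictl.yaml that gets written; with it in place, crictl on the node talks to cri-dockerd instead of containerd (crictl reads /etc/crictl.yaml by default), so a check like the one the log runs later would be:

  sudo crictl version   # reports RuntimeName: docker via /var/run/cri-dockerd.sock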
	I0224 15:06:04.736894   32699 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 15:06:04.810788   32699 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 15:06:04.901744   32699 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 15:06:04.901763   32699 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
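The 144-byte daemon.json is not shown in the log; based on how minikube typically configures the Docker engine for a cgroupfs host, its shape would be roughly the following (contents assumed, not taken from this run):

  {
    "exec-opts": ["native.cgroupdriver=cgroupfs"],
    "log-driver": "json-file",
    "log-opts": { "max-size": "100m" },
    "storage-driver": "overlay2"
  }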
	I0224 15:06:04.915707   32699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:06:05.014497   32699 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 15:06:05.237783   32699 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:06:05.310898   32699 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0224 15:06:05.310967   32699 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 15:06:05.385561   32699 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:06:05.455106   32699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:06:05.523153   32699 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 15:06:05.534415   32699 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 15:06:05.534505   32699 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 15:06:05.538376   32699 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0224 15:06:05.538389   32699 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0224 15:06:05.538394   32699 command_runner.go:130] > Device: aeh/174d	Inode: 206         Links: 1
	I0224 15:06:05.538399   32699 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0224 15:06:05.538405   32699 command_runner.go:130] > Access: 2023-02-24 23:06:05.530159588 +0000
	I0224 15:06:05.538409   32699 command_runner.go:130] > Modify: 2023-02-24 23:06:05.530159588 +0000
	I0224 15:06:05.538415   32699 command_runner.go:130] > Change: 2023-02-24 23:06:05.531159588 +0000
	I0224 15:06:05.538426   32699 command_runner.go:130] >  Birth: -
	I0224 15:06:05.538523   32699 start.go:553] Will wait 60s for crictl version
	I0224 15:06:05.538570   32699 ssh_runner.go:195] Run: which crictl
	I0224 15:06:05.542039   32699 command_runner.go:130] > /usr/bin/crictl
	I0224 15:06:05.542186   32699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 15:06:05.641216   32699 command_runner.go:130] > Version:  0.1.0
	I0224 15:06:05.641227   32699 command_runner.go:130] > RuntimeName:  docker
	I0224 15:06:05.641231   32699 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0224 15:06:05.641236   32699 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0224 15:06:05.643151   32699 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 15:06:05.643224   32699 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:06:05.666059   32699 command_runner.go:130] > 23.0.1
	I0224 15:06:05.668045   32699 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:06:05.691085   32699 command_runner.go:130] > 23.0.1
	I0224 15:06:05.736151   32699 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 15:06:05.736357   32699 cli_runner.go:164] Run: docker exec -t multinode-358000 dig +short host.docker.internal
	I0224 15:06:05.847649   32699 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 15:06:05.847759   32699 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 15:06:05.852257   32699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
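The one-liner above drops any stale host.minikube.internal entry, re-adds it against the freshly dug host IP, and stages the result in a temp file before copying it back over /etc/hosts, which afterwards contains a line like:

  192.168.65.2	host.minikube.internal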
	I0224 15:06:05.862318   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:05.919094   32699 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:06:05.919180   32699 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:06:05.938574   32699 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0224 15:06:05.938594   32699 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0224 15:06:05.938599   32699 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0224 15:06:05.938606   32699 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0224 15:06:05.938611   32699 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0224 15:06:05.938616   32699 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0224 15:06:05.938620   32699 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0224 15:06:05.938626   32699 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 15:06:05.940094   32699 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 15:06:05.940108   32699 docker.go:560] Images already preloaded, skipping extraction
	I0224 15:06:05.940179   32699 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:06:05.959670   32699 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0224 15:06:05.959683   32699 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0224 15:06:05.959688   32699 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0224 15:06:05.959698   32699 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0224 15:06:05.959705   32699 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0224 15:06:05.959713   32699 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0224 15:06:05.959718   32699 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0224 15:06:05.959728   32699 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 15:06:05.961484   32699 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 15:06:05.961497   32699 cache_images.go:84] Images are preloaded, skipping loading
	I0224 15:06:05.961590   32699 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 15:06:05.985614   32699 command_runner.go:130] > cgroupfs
	I0224 15:06:05.987372   32699 cni.go:84] Creating CNI manager for ""
	I0224 15:06:05.987384   32699 cni.go:136] 1 nodes found, recommending kindnet
	I0224 15:06:05.987403   32699 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 15:06:05.987421   32699 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-358000 NodeName:multinode-358000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 15:06:05.987534   32699 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-358000"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
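The rendered kubeadm config above is what later gets copied to /var/tmp/minikube/kubeadm.yaml and passed to kubeadm init; an illustrative dry run against the same file (the real invocation, with its preflight-error overrides, appears further down in the log) would be:

  sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run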
	
	I0224 15:06:05.987610   32699 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-358000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 15:06:05.987689   32699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 15:06:05.995295   32699 command_runner.go:130] > kubeadm
	I0224 15:06:05.995306   32699 command_runner.go:130] > kubectl
	I0224 15:06:05.995311   32699 command_runner.go:130] > kubelet
	I0224 15:06:05.996200   32699 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 15:06:05.996290   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 15:06:06.004706   32699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0224 15:06:06.017726   32699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 15:06:06.030758   32699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0224 15:06:06.043886   32699 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0224 15:06:06.047647   32699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:06:06.057556   32699 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000 for IP: 192.168.58.2
	I0224 15:06:06.057575   32699 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.057764   32699 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 15:06:06.057832   32699 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 15:06:06.057876   32699 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key
	I0224 15:06:06.057890   32699 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt with IP's: []
	I0224 15:06:06.218063   32699 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt ...
	I0224 15:06:06.218072   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt: {Name:mkf9646423d8a5efec8e5fc88a77aa92f40ab15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.218366   32699 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key ...
	I0224 15:06:06.218373   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key: {Name:mk2a0c1a353142ca931c8656aa00ef7eeee445a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.218568   32699 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key.cee25041
	I0224 15:06:06.218584   32699 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0224 15:06:06.255271   32699 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt.cee25041 ...
	I0224 15:06:06.255280   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt.cee25041: {Name:mkfa6679948f9a1b5bdbbd6c85f67c8f1bb24f07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.255539   32699 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key.cee25041 ...
	I0224 15:06:06.255550   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key.cee25041: {Name:mkd0d79de3148e3834748981d65e43b5337ab740 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.255768   32699 certs.go:333] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt
	I0224 15:06:06.255975   32699 certs.go:337] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key
	I0224 15:06:06.256184   32699 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.key
	I0224 15:06:06.256197   32699 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.crt with IP's: []
	I0224 15:06:06.557247   32699 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.crt ...
	I0224 15:06:06.557266   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.crt: {Name:mk6c0746cc6f68aa1e42c1925a73aab63483dd88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.557559   32699 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.key ...
	I0224 15:06:06.557578   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.key: {Name:mk07b966efb39d3ac2ad033ba22960f8ad80f23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:06.557813   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0224 15:06:06.557846   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0224 15:06:06.557894   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0224 15:06:06.557932   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0224 15:06:06.557952   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0224 15:06:06.557974   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0224 15:06:06.557992   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0224 15:06:06.558010   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0224 15:06:06.558105   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 15:06:06.558154   32699 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 15:06:06.558165   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 15:06:06.558202   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 15:06:06.558235   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 15:06:06.558267   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 15:06:06.558334   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:06:06.558368   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> /usr/share/ca-certificates/268712.pem
	I0224 15:06:06.558391   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:06:06.558409   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem -> /usr/share/ca-certificates/26871.pem
	I0224 15:06:06.558906   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 15:06:06.577898   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 15:06:06.594905   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 15:06:06.611993   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 15:06:06.629312   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 15:06:06.646256   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 15:06:06.663331   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 15:06:06.680653   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 15:06:06.698206   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 15:06:06.715800   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 15:06:06.733019   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 15:06:06.750170   32699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 15:06:06.763103   32699 ssh_runner.go:195] Run: openssl version
	I0224 15:06:06.768175   32699 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0224 15:06:06.768504   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 15:06:06.776582   32699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 15:06:06.780496   32699 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:06:06.780635   32699 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:06:06.780676   32699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 15:06:06.785834   32699 command_runner.go:130] > 3ec20f2e
	I0224 15:06:06.786247   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 15:06:06.794393   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 15:06:06.802362   32699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:06:06.806286   32699 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:06:06.806318   32699 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:06:06.806367   32699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:06:06.811484   32699 command_runner.go:130] > b5213941
	I0224 15:06:06.811883   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 15:06:06.820100   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 15:06:06.828204   32699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 15:06:06.832234   32699 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:06:06.832390   32699 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:06:06.832436   32699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 15:06:06.837521   32699 command_runner.go:130] > 51391683
	I0224 15:06:06.837773   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
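The pattern in the three blocks above is OpenSSL's hashed-directory lookup: each PEM placed under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject-hash name so TLS clients on the node can resolve it, e.g. for the minikube CA:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0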
	I0224 15:06:06.845990   32699 kubeadm.go:401] StartCluster: {Name:multinode-358000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:06:06.846099   32699 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 15:06:06.865390   32699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 15:06:06.873234   32699 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0224 15:06:06.873245   32699 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0224 15:06:06.873254   32699 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0224 15:06:06.873313   32699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 15:06:06.880903   32699 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 15:06:06.880953   32699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 15:06:06.888270   32699 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0224 15:06:06.888283   32699 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0224 15:06:06.888295   32699 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0224 15:06:06.888308   32699 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 15:06:06.888339   32699 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 15:06:06.888363   32699 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 15:06:06.937859   32699 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0224 15:06:06.937870   32699 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0224 15:06:06.938198   32699 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 15:06:06.938210   32699 command_runner.go:130] > [preflight] Running pre-flight checks
	I0224 15:06:07.046825   32699 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 15:06:07.046862   32699 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 15:06:07.047006   32699 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 15:06:07.047014   32699 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 15:06:07.047128   32699 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 15:06:07.047135   32699 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 15:06:07.179277   32699 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 15:06:07.179291   32699 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 15:06:07.221486   32699 out.go:204]   - Generating certificates and keys ...
	I0224 15:06:07.221624   32699 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0224 15:06:07.221650   32699 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 15:06:07.221778   32699 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0224 15:06:07.221785   32699 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 15:06:07.309934   32699 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 15:06:07.309947   32699 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 15:06:07.498825   32699 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0224 15:06:07.498842   32699 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0224 15:06:07.761954   32699 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0224 15:06:07.761962   32699 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0224 15:06:07.884540   32699 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0224 15:06:07.884595   32699 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0224 15:06:08.148035   32699 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0224 15:06:08.148051   32699 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0224 15:06:08.148147   32699 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-358000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 15:06:08.148152   32699 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-358000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 15:06:08.255213   32699 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0224 15:06:08.255226   32699 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0224 15:06:08.255438   32699 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-358000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 15:06:08.255443   32699 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-358000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0224 15:06:08.437384   32699 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 15:06:08.437399   32699 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 15:06:08.545210   32699 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 15:06:08.545220   32699 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 15:06:08.589595   32699 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0224 15:06:08.589599   32699 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0224 15:06:08.589653   32699 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 15:06:08.589660   32699 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 15:06:08.925094   32699 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 15:06:08.925119   32699 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 15:06:09.012936   32699 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 15:06:09.012968   32699 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 15:06:09.243341   32699 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 15:06:09.243356   32699 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 15:06:09.383002   32699 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 15:06:09.383017   32699 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 15:06:09.393734   32699 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:06:09.393740   32699 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:06:09.394431   32699 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:06:09.394439   32699 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:06:09.394474   32699 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0224 15:06:09.394483   32699 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0224 15:06:09.463550   32699 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 15:06:09.463554   32699 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 15:06:09.484938   32699 out.go:204]   - Booting up control plane ...
	I0224 15:06:09.485014   32699 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 15:06:09.485028   32699 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 15:06:09.485100   32699 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 15:06:09.485107   32699 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 15:06:09.485167   32699 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 15:06:09.485179   32699 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 15:06:09.485248   32699 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 15:06:09.485255   32699 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 15:06:09.485396   32699 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 15:06:09.485398   32699 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 15:06:18.970599   32699 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.502052 seconds
	I0224 15:06:18.970617   32699 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.502052 seconds
	I0224 15:06:18.970735   32699 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 15:06:18.970745   32699 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 15:06:18.978671   32699 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 15:06:18.978699   32699 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 15:06:19.494147   32699 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0224 15:06:19.494183   32699 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0224 15:06:19.494397   32699 kubeadm.go:322] [mark-control-plane] Marking the node multinode-358000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 15:06:19.494409   32699 command_runner.go:130] > [mark-control-plane] Marking the node multinode-358000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 15:06:20.002931   32699 kubeadm.go:322] [bootstrap-token] Using token: 5c43ss.8mz735kcmuqfeuba
	I0224 15:06:20.002945   32699 command_runner.go:130] > [bootstrap-token] Using token: 5c43ss.8mz735kcmuqfeuba
	I0224 15:06:20.025718   32699 out.go:204]   - Configuring RBAC rules ...
	I0224 15:06:20.025833   32699 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 15:06:20.025845   32699 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 15:06:20.141758   32699 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 15:06:20.141774   32699 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 15:06:20.147058   32699 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 15:06:20.147068   32699 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 15:06:20.149662   32699 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 15:06:20.149668   32699 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 15:06:20.151979   32699 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 15:06:20.151988   32699 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 15:06:20.154008   32699 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 15:06:20.154025   32699 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 15:06:20.162198   32699 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 15:06:20.162204   32699 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 15:06:20.319035   32699 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0224 15:06:20.319039   32699 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0224 15:06:20.559257   32699 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0224 15:06:20.559289   32699 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0224 15:06:20.560060   32699 kubeadm.go:322] 
	I0224 15:06:20.560138   32699 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0224 15:06:20.560165   32699 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0224 15:06:20.560182   32699 kubeadm.go:322] 
	I0224 15:06:20.560269   32699 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0224 15:06:20.560285   32699 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0224 15:06:20.560300   32699 kubeadm.go:322] 
	I0224 15:06:20.560334   32699 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0224 15:06:20.560350   32699 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0224 15:06:20.560444   32699 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 15:06:20.560450   32699 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 15:06:20.560529   32699 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 15:06:20.560539   32699 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 15:06:20.560544   32699 kubeadm.go:322] 
	I0224 15:06:20.560587   32699 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0224 15:06:20.560599   32699 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0224 15:06:20.560617   32699 kubeadm.go:322] 
	I0224 15:06:20.560682   32699 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 15:06:20.560695   32699 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 15:06:20.560713   32699 kubeadm.go:322] 
	I0224 15:06:20.560790   32699 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0224 15:06:20.560799   32699 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0224 15:06:20.560879   32699 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 15:06:20.560889   32699 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 15:06:20.560988   32699 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 15:06:20.560996   32699 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 15:06:20.561002   32699 kubeadm.go:322] 
	I0224 15:06:20.561104   32699 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0224 15:06:20.561118   32699 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0224 15:06:20.561181   32699 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0224 15:06:20.561187   32699 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0224 15:06:20.561192   32699 kubeadm.go:322] 
	I0224 15:06:20.561276   32699 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5c43ss.8mz735kcmuqfeuba \
	I0224 15:06:20.561289   32699 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 5c43ss.8mz735kcmuqfeuba \
	I0224 15:06:20.561386   32699 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e \
	I0224 15:06:20.561395   32699 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e \
	I0224 15:06:20.561419   32699 kubeadm.go:322] 	--control-plane 
	I0224 15:06:20.561424   32699 command_runner.go:130] > 	--control-plane 
	I0224 15:06:20.561430   32699 kubeadm.go:322] 
	I0224 15:06:20.561492   32699 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0224 15:06:20.561496   32699 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0224 15:06:20.561498   32699 kubeadm.go:322] 
	I0224 15:06:20.561613   32699 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5c43ss.8mz735kcmuqfeuba \
	I0224 15:06:20.561620   32699 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 5c43ss.8mz735kcmuqfeuba \
	I0224 15:06:20.561693   32699 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e 
	I0224 15:06:20.561697   32699 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e 
	I0224 15:06:20.566443   32699 kubeadm.go:322] W0224 23:06:06.929709    1301 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 15:06:20.566460   32699 command_runner.go:130] ! W0224 23:06:06.929709    1301 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 15:06:20.566627   32699 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0224 15:06:20.566638   32699 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0224 15:06:20.566768   32699 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:06:20.566778   32699 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:06:20.566802   32699 cni.go:84] Creating CNI manager for ""
	I0224 15:06:20.566817   32699 cni.go:136] 1 nodes found, recommending kindnet
	I0224 15:06:20.606381   32699 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0224 15:06:20.628536   32699 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0224 15:06:20.656094   32699 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0224 15:06:20.656117   32699 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0224 15:06:20.656126   32699 command_runner.go:130] > Device: a6h/166d	Inode: 2757559     Links: 1
	I0224 15:06:20.656135   32699 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 15:06:20.656144   32699 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0224 15:06:20.656151   32699 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0224 15:06:20.656172   32699 command_runner.go:130] > Change: 2023-02-24 22:41:56.035825051 +0000
	I0224 15:06:20.656177   32699 command_runner.go:130] >  Birth: -
	I0224 15:06:20.656261   32699 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0224 15:06:20.656270   32699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0224 15:06:20.670756   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 15:06:21.280785   32699 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0224 15:06:21.285685   32699 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0224 15:06:21.291793   32699 command_runner.go:130] > serviceaccount/kindnet created
	I0224 15:06:21.299522   32699 command_runner.go:130] > daemonset.apps/kindnet created
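Applying the generated cni.yaml creates the four kindnet objects listed above; assuming the manifest targets kube-system (where minikube's kindnet daemonset normally runs), its rollout could be checked with:

  kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonset kindnet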
	I0224 15:06:21.305563   32699 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 15:06:21.305650   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:21.305648   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=08976559d74fb9c2654733dc21cb8f9d9ec24374 minikube.k8s.io/name=multinode-358000 minikube.k8s.io/updated_at=2023_02_24T15_06_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:21.313911   32699 command_runner.go:130] > -16
	I0224 15:06:21.313936   32699 ops.go:34] apiserver oom_adj: -16
	I0224 15:06:21.388520   32699 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0224 15:06:21.411045   32699 command_runner.go:130] > node/multinode-358000 labeled
	I0224 15:06:21.411099   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:21.505089   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:22.006407   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:22.066690   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:22.507343   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:22.569646   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:23.006632   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:23.071130   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:23.506590   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:23.569176   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:24.006544   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:24.071511   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:24.506502   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:24.570330   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:25.007555   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:25.073387   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:25.506555   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:25.573121   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:26.006564   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:26.072268   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:26.507511   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:26.571578   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:27.007397   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:27.071721   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:27.506676   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:27.571090   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:28.006648   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:28.070884   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:28.507586   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:28.568767   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:29.006654   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:29.067325   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:29.506706   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:29.574296   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:30.006789   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:30.072674   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:30.507377   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:30.570895   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:31.006779   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:31.073349   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:31.507721   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:31.573940   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:32.006759   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:32.073746   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:32.507329   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:32.573734   32699 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0224 15:06:33.007394   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:06:33.069741   32699 command_runner.go:130] > NAME      SECRETS   AGE
	I0224 15:06:33.069753   32699 command_runner.go:130] > default   0         1s
	I0224 15:06:33.073484   32699 kubeadm.go:1073] duration metric: took 11.767553662s to wait for elevateKubeSystemPrivileges.
	I0224 15:06:33.073497   32699 kubeadm.go:403] StartCluster complete in 26.226728728s
	I0224 15:06:33.073518   32699 settings.go:142] acquiring lock: {Name:mk61f6764f7c264302b01ffc8eee0ee0f10d20c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:33.073608   32699 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:06:33.074109   32699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/kubeconfig: {Name:mk1182fefc6ba3b7ea2d2356c47127703005a4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:06:33.074361   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 15:06:33.074395   32699 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0224 15:06:33.074455   32699 addons.go:65] Setting storage-provisioner=true in profile "multinode-358000"
	I0224 15:06:33.074463   32699 addons.go:65] Setting default-storageclass=true in profile "multinode-358000"
	I0224 15:06:33.074467   32699 addons.go:227] Setting addon storage-provisioner=true in "multinode-358000"
	I0224 15:06:33.074487   32699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-358000"
	I0224 15:06:33.074515   32699 config.go:182] Loaded profile config "multinode-358000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:06:33.074531   32699 host.go:66] Checking if "multinode-358000" exists ...
	I0224 15:06:33.074757   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:06:33.074809   32699 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:06:33.074857   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:06:33.075814   32699 kapi.go:59] client config for multinode-358000: &rest.Config{Host:"https://127.0.0.1:58093", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:06:33.078838   32699 cert_rotation.go:137] Starting client certificate rotation controller
	I0224 15:06:33.079199   32699 round_trippers.go:463] GET https://127.0.0.1:58093/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 15:06:33.079207   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:33.079216   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:33.079223   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:33.088272   32699 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0224 15:06:33.088287   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:33.088293   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:33.088298   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:33.088303   32699 round_trippers.go:580]     Content-Length: 291
	I0224 15:06:33.088307   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:33 GMT
	I0224 15:06:33.088311   32699 round_trippers.go:580]     Audit-Id: 59f5e50e-3e7b-44e1-910b-6dd59f461b2c
	I0224 15:06:33.088316   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:33.088320   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:33.088342   32699 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0964618a-58cb-4193-adc5-a51a070222ce","resourceVersion":"299","creationTimestamp":"2023-02-24T23:06:20Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0224 15:06:33.088656   32699 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0964618a-58cb-4193-adc5-a51a070222ce","resourceVersion":"299","creationTimestamp":"2023-02-24T23:06:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0224 15:06:33.088684   32699 round_trippers.go:463] PUT https://127.0.0.1:58093/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 15:06:33.088688   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:33.088694   32699 round_trippers.go:473]     Content-Type: application/json
	I0224 15:06:33.088700   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:33.088713   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:33.094819   32699 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0224 15:06:33.094834   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:33.094840   32699 round_trippers.go:580]     Audit-Id: 1e89de0f-c6c6-49ba-9ec3-e207c79c7611
	I0224 15:06:33.094845   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:33.094849   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:33.094855   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:33.094862   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:33.094867   32699 round_trippers.go:580]     Content-Length: 291
	I0224 15:06:33.094872   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:33 GMT
	I0224 15:06:33.094890   32699 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0964618a-58cb-4193-adc5-a51a070222ce","resourceVersion":"311","creationTimestamp":"2023-02-24T23:06:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0224 15:06:33.141535   32699 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:06:33.163361   32699 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 15:06:33.163644   32699 kapi.go:59] client config for multinode-358000: &rest.Config{Host:"https://127.0.0.1:58093", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:06:33.200673   32699 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 15:06:33.200693   32699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 15:06:33.200810   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:33.201017   32699 round_trippers.go:463] GET https://127.0.0.1:58093/apis/storage.k8s.io/v1/storageclasses
	I0224 15:06:33.201035   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:33.201048   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:33.201060   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:33.206583   32699 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0224 15:06:33.206607   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:33.206613   32699 round_trippers.go:580]     Audit-Id: fcdbcbdd-f0e1-4bd7-9082-3e9ae0db8220
	I0224 15:06:33.206624   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:33.206630   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:33.206636   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:33.206640   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:33.206645   32699 round_trippers.go:580]     Content-Length: 109
	I0224 15:06:33.206649   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:33 GMT
	I0224 15:06:33.206677   32699 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"316"},"items":[]}
	I0224 15:06:33.207035   32699 addons.go:227] Setting addon default-storageclass=true in "multinode-358000"
	I0224 15:06:33.207057   32699 host.go:66] Checking if "multinode-358000" exists ...
	I0224 15:06:33.207442   32699 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:06:33.210737   32699 command_runner.go:130] > apiVersion: v1
	I0224 15:06:33.210777   32699 command_runner.go:130] > data:
	I0224 15:06:33.210785   32699 command_runner.go:130] >   Corefile: |
	I0224 15:06:33.210791   32699 command_runner.go:130] >     .:53 {
	I0224 15:06:33.210797   32699 command_runner.go:130] >         errors
	I0224 15:06:33.210804   32699 command_runner.go:130] >         health {
	I0224 15:06:33.210816   32699 command_runner.go:130] >            lameduck 5s
	I0224 15:06:33.210823   32699 command_runner.go:130] >         }
	I0224 15:06:33.210831   32699 command_runner.go:130] >         ready
	I0224 15:06:33.210848   32699 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0224 15:06:33.210856   32699 command_runner.go:130] >            pods insecure
	I0224 15:06:33.210868   32699 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0224 15:06:33.210881   32699 command_runner.go:130] >            ttl 30
	I0224 15:06:33.210888   32699 command_runner.go:130] >         }
	I0224 15:06:33.210896   32699 command_runner.go:130] >         prometheus :9153
	I0224 15:06:33.210903   32699 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0224 15:06:33.210915   32699 command_runner.go:130] >            max_concurrent 1000
	I0224 15:06:33.210922   32699 command_runner.go:130] >         }
	I0224 15:06:33.210931   32699 command_runner.go:130] >         cache 30
	I0224 15:06:33.210937   32699 command_runner.go:130] >         loop
	I0224 15:06:33.210944   32699 command_runner.go:130] >         reload
	I0224 15:06:33.210950   32699 command_runner.go:130] >         loadbalance
	I0224 15:06:33.210956   32699 command_runner.go:130] >     }
	I0224 15:06:33.210962   32699 command_runner.go:130] > kind: ConfigMap
	I0224 15:06:33.210969   32699 command_runner.go:130] > metadata:
	I0224 15:06:33.210985   32699 command_runner.go:130] >   creationTimestamp: "2023-02-24T23:06:20Z"
	I0224 15:06:33.210992   32699 command_runner.go:130] >   name: coredns
	I0224 15:06:33.210999   32699 command_runner.go:130] >   namespace: kube-system
	I0224 15:06:33.211004   32699 command_runner.go:130] >   resourceVersion: "227"
	I0224 15:06:33.211014   32699 command_runner.go:130] >   uid: 82f87881-3653-4247-95f2-0ea74ee5b71c
	I0224 15:06:33.211264   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0224 15:06:33.276686   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:33.280279   32699 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 15:06:33.280298   32699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 15:06:33.280390   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:33.342167   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:06:33.513398   32699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 15:06:33.567681   32699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 15:06:33.597274   32699 round_trippers.go:463] GET https://127.0.0.1:58093/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 15:06:33.597289   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:33.597296   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:33.597302   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:33.654729   32699 round_trippers.go:574] Response Status: 200 OK in 57 milliseconds
	I0224 15:06:33.654755   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:33.654767   32699 round_trippers.go:580]     Audit-Id: 5d677796-dfd3-4af6-95e9-28f734425502
	I0224 15:06:33.654781   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:33.654792   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:33.654813   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:33.654833   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:33.654853   32699 round_trippers.go:580]     Content-Length: 291
	I0224 15:06:33.654869   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:33 GMT
	I0224 15:06:33.654925   32699 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0964618a-58cb-4193-adc5-a51a070222ce","resourceVersion":"352","creationTimestamp":"2023-02-24T23:06:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0224 15:06:33.655027   32699 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-358000" context rescaled to 1 replicas
	I0224 15:06:33.655064   32699 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 15:06:33.677515   32699 out.go:177] * Verifying Kubernetes components...
	I0224 15:06:33.697324   32699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:06:33.764331   32699 command_runner.go:130] > configmap/coredns replaced
	I0224 15:06:33.764366   32699 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0224 15:06:33.972253   32699 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0224 15:06:33.972277   32699 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0224 15:06:33.972293   32699 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0224 15:06:33.972305   32699 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0224 15:06:33.972312   32699 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0224 15:06:33.972318   32699 command_runner.go:130] > pod/storage-provisioner created
	I0224 15:06:33.980295   32699 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0224 15:06:33.980435   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:06:34.002751   32699 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0224 15:06:34.038875   32699 addons.go:492] enable addons completed in 964.411003ms: enabled=[storage-provisioner default-storageclass]
	I0224 15:06:34.063518   32699 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:06:34.063795   32699 kapi.go:59] client config for multinode-358000: &rest.Config{Host:"https://127.0.0.1:58093", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:06:34.064068   32699 node_ready.go:35] waiting up to 6m0s for node "multinode-358000" to be "Ready" ...
	I0224 15:06:34.064130   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:34.064139   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:34.064148   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:34.064154   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:34.067142   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:34.067157   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:34.067165   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:34.067170   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:34.067175   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:34.067183   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:34 GMT
	I0224 15:06:34.067188   32699 round_trippers.go:580]     Audit-Id: f6e60995-4f9c-4d62-8d7a-661da686d1f9
	I0224 15:06:34.067208   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:34.067288   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:34.067718   32699 node_ready.go:49] node "multinode-358000" has status "Ready":"True"
	I0224 15:06:34.067728   32699 node_ready.go:38] duration metric: took 3.645908ms waiting for node "multinode-358000" to be "Ready" ...
	I0224 15:06:34.067737   32699 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 15:06:34.067785   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods
	I0224 15:06:34.067790   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:34.067799   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:34.067810   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:34.071884   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:34.071910   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:34.071919   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:34 GMT
	I0224 15:06:34.071925   32699 round_trippers.go:580]     Audit-Id: ffc6bee6-03f1-40ef-bf7c-4f88c340bc75
	I0224 15:06:34.071929   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:34.071934   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:34.071939   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:34.071944   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:34.073208   32699 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"364"},"items":[{"metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 60468 chars]
	I0224 15:06:34.076341   32699 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-qfqth" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:34.076397   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:34.076403   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:34.076410   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:34.076431   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:34.079510   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:34.079522   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:34.079528   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:34 GMT
	I0224 15:06:34.079533   32699 round_trippers.go:580]     Audit-Id: e39fe1fa-f969-4afa-a589-74a87a5ece31
	I0224 15:06:34.079537   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:34.079542   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:34.079548   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:34.079553   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:34.079614   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0224 15:06:34.079907   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:34.079914   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:34.079920   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:34.079951   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:34.082417   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:34.082432   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:34.082438   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:34.082443   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:34.082453   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:34.082462   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:34 GMT
	I0224 15:06:34.082467   32699 round_trippers.go:580]     Audit-Id: ae2f2535-d29f-4fd8-a253-4ffb7bb3f078
	I0224 15:06:34.082472   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:34.082600   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:34.583877   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:34.583891   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:34.583897   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:34.583902   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:34.588173   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:34.588187   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:34.588193   32699 round_trippers.go:580]     Audit-Id: dd93584f-cf02-415f-be96-7eb3d1617e1d
	I0224 15:06:34.588201   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:34.588206   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:34.588211   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:34.588217   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:34.588222   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:34 GMT
	I0224 15:06:34.588277   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0224 15:06:34.588597   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:34.588604   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:34.588610   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:34.588615   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:34.591422   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:34.591438   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:34.591446   32699 round_trippers.go:580]     Audit-Id: a4c2783d-b424-4fbc-9e4b-424ee77985e3
	I0224 15:06:34.591453   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:34.591460   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:34.591467   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:34.591473   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:34.591482   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:34 GMT
	I0224 15:06:34.591599   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:35.084237   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:35.084257   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:35.084266   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:35.084272   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:35.086632   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:35.086647   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:35.086658   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:35 GMT
	I0224 15:06:35.086671   32699 round_trippers.go:580]     Audit-Id: c33e1d21-7927-4507-8ee6-c9a7fe5f1f18
	I0224 15:06:35.086678   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:35.086688   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:35.086694   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:35.086699   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:35.086761   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0224 15:06:35.087047   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:35.087054   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:35.087060   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:35.087066   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:35.107261   32699 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0224 15:06:35.107284   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:35.107290   32699 round_trippers.go:580]     Audit-Id: 5a4f5900-bafb-4535-a25d-1ff9a3d1ff53
	I0224 15:06:35.107295   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:35.107300   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:35.107304   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:35.107309   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:35.107317   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:35 GMT
	I0224 15:06:35.107407   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:35.583099   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:35.583118   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:35.583130   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:35.583140   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:35.586729   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:35.586739   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:35.586747   32699 round_trippers.go:580]     Audit-Id: f908a10e-08e1-4175-9df7-836d89c9e028
	I0224 15:06:35.586754   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:35.586759   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:35.586764   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:35.586769   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:35.586774   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:35 GMT
	I0224 15:06:35.586829   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0224 15:06:35.587118   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:35.587125   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:35.587131   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:35.587136   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:35.589668   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:35.589678   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:35.589684   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:35 GMT
	I0224 15:06:35.589689   32699 round_trippers.go:580]     Audit-Id: d1c142fe-bffa-4f98-9a10-535d936f9352
	I0224 15:06:35.589694   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:35.589699   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:35.589707   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:35.589713   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:35.589765   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:36.083190   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:36.083215   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:36.083227   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:36.083237   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:36.087044   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:36.087056   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:36.087062   32699 round_trippers.go:580]     Audit-Id: 995980ef-ebcc-4089-aa03-821777601be5
	I0224 15:06:36.087066   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:36.087071   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:36.087076   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:36.087081   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:36.087087   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:36 GMT
	I0224 15:06:36.087350   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0224 15:06:36.087621   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:36.087627   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:36.087633   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:36.087639   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:36.089942   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:36.089952   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:36.089957   32699 round_trippers.go:580]     Audit-Id: 36532142-5fe2-40a7-95af-2ee5ca2e58da
	I0224 15:06:36.089962   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:36.089971   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:36.089978   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:36.089987   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:36.089993   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:36 GMT
	I0224 15:06:36.090051   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:36.090230   32699 pod_ready.go:102] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"
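	[editor's note] The block above (and the cycles that follow) is minikube's readiness wait: roughly every 500 ms it GETs the coredns pod and the node, then logs that the pod's "Ready" condition is still "False". The following is a minimal, hypothetical Go sketch of a similar poll written against client-go; it is not minikube's pod_ready.go implementation, and the namespace/pod/timeout values are taken from the log purely for illustration.

	// readiness_poll_sketch.go - illustrative only, assumes a reachable kubeconfig.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Load the default kubeconfig (~/.kube/config); minikube itself builds
		// its client differently, this is just the simplest grounded setup.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()

		for {
			// Each iteration mirrors one GET .../pods/coredns-787d4945fb-qfqth in the log.
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-787d4945fb-qfqth", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println(`pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"`)
			select {
			case <-ctx.Done():
				panic("timed out waiting for pod readiness")
			case <-time.After(500 * time.Millisecond): // ~500 ms cadence, matching the log timestamps
			}
		}
	}

	The test eventually fails because this condition never flips to True within the test's timeout; the rest of the log is further iterations of the same cycle.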
	I0224 15:06:36.583237   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:36.583257   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:36.583269   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:36.583279   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:36.587672   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:36.587682   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:36.587687   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:36.587692   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:36.587697   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:36.587702   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:36 GMT
	I0224 15:06:36.587707   32699 round_trippers.go:580]     Audit-Id: 7fe00f9e-914d-4514-8f45-48bdc4d07b92
	I0224 15:06:36.587712   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:36.587766   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"334","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0224 15:06:36.588034   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:36.588043   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:36.588048   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:36.588054   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:36.590204   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:36.590214   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:36.590220   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:36.590226   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:36.590232   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:36.590236   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:36 GMT
	I0224 15:06:36.590242   32699 round_trippers.go:580]     Audit-Id: 78bbddcc-de2f-4ace-9f55-99476266d294
	I0224 15:06:36.590246   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:36.590304   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:37.083042   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:37.083057   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:37.083065   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:37.083070   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:37.086096   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:37.086119   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:37.086130   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:37 GMT
	I0224 15:06:37.086139   32699 round_trippers.go:580]     Audit-Id: e63c6fe5-0042-4315-af87-4c2ad096f9d7
	I0224 15:06:37.086147   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:37.086154   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:37.086162   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:37.086169   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:37.087574   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:37.087974   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:37.087984   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:37.088003   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:37.088018   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:37.092371   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:37.092389   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:37.092403   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:37.092416   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:37.092429   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:37.092438   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:37.092447   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:37 GMT
	I0224 15:06:37.092455   32699 round_trippers.go:580]     Audit-Id: 7e095165-f88d-4681-a7de-bc9251f69917
	I0224 15:06:37.093143   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:37.583394   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:37.583407   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:37.583417   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:37.583423   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:37.585916   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:37.585930   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:37.585937   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:37.585942   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:37.585951   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:37.585957   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:37.585962   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:37 GMT
	I0224 15:06:37.585968   32699 round_trippers.go:580]     Audit-Id: 47028fd6-67b4-4d83-ac7f-e713c02c8f2a
	I0224 15:06:37.586034   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:37.586331   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:37.586338   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:37.586344   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:37.586349   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:37.588696   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:37.588708   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:37.588714   32699 round_trippers.go:580]     Audit-Id: 3b897d02-5e1d-4448-9565-e9a30c8f2965
	I0224 15:06:37.588718   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:37.588723   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:37.588727   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:37.588732   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:37.588737   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:37 GMT
	I0224 15:06:37.588808   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:38.083132   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:38.083147   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:38.083154   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:38.083159   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:38.085732   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:38.085749   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:38.085755   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:38.085760   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:38.085765   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:38.085770   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:38 GMT
	I0224 15:06:38.085775   32699 round_trippers.go:580]     Audit-Id: c0eaeffe-dec9-40b5-a8cb-eebeb3ccdb7f
	I0224 15:06:38.085781   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:38.086035   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:38.086327   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:38.086334   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:38.086341   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:38.086346   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:38.088401   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:38.088412   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:38.088423   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:38.088434   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:38.088440   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:38.088445   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:38 GMT
	I0224 15:06:38.088450   32699 round_trippers.go:580]     Audit-Id: a786a4f3-3e4c-4bca-a8b7-5b901c560b67
	I0224 15:06:38.088456   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:38.088537   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:38.583342   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:38.583361   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:38.583374   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:38.583384   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:38.587349   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:38.587363   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:38.587373   32699 round_trippers.go:580]     Audit-Id: bd3d2be4-828c-4d38-be39-239d29fd23be
	I0224 15:06:38.587380   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:38.587387   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:38.587394   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:38.587400   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:38.587407   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:38 GMT
	I0224 15:06:38.587534   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:38.587849   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:38.587857   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:38.587863   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:38.587884   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:38.590026   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:38.590035   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:38.590040   32699 round_trippers.go:580]     Audit-Id: 848a1ef4-3deb-4b3c-a972-d4968d44a0e1
	I0224 15:06:38.590045   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:38.590050   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:38.590058   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:38.590064   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:38.590069   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:38 GMT
	I0224 15:06:38.590208   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:38.590397   32699 pod_ready.go:102] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"
	I0224 15:06:39.084507   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:39.084531   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:39.084595   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:39.084608   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:39.088901   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:39.088913   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:39.088920   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:39.088928   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:39 GMT
	I0224 15:06:39.088933   32699 round_trippers.go:580]     Audit-Id: 3503d2b4-9b4e-4c2e-ad85-ba95d6a785fa
	I0224 15:06:39.088938   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:39.088943   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:39.088948   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:39.089009   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:39.089290   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:39.089296   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:39.089302   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:39.089308   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:39.091262   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:06:39.091273   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:39.091278   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:39.091283   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:39.091288   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:39.091293   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:39.091300   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:39 GMT
	I0224 15:06:39.091304   32699 round_trippers.go:580]     Audit-Id: 03c9d938-e319-48aa-922c-6821f72a3e73
	I0224 15:06:39.091364   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:39.583511   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:39.583532   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:39.583545   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:39.583555   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:39.587519   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:39.587535   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:39.587543   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:39 GMT
	I0224 15:06:39.587549   32699 round_trippers.go:580]     Audit-Id: 012bb995-9d6d-41aa-8061-6e5b78ae9aa4
	I0224 15:06:39.587555   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:39.587562   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:39.587569   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:39.587576   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:39.587786   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:39.588131   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:39.588137   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:39.588143   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:39.588149   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:39.590493   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:39.590501   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:39.590507   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:39.590512   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:39.590517   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:39.590522   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:39.590527   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:39 GMT
	I0224 15:06:39.590531   32699 round_trippers.go:580]     Audit-Id: df736b49-434c-4d6e-8fac-ae8d8d4f96eb
	I0224 15:06:39.590584   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:40.083246   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:40.083259   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:40.083265   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:40.083270   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:40.086187   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:40.086204   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:40.086212   32699 round_trippers.go:580]     Audit-Id: 1b486a99-1510-4cbc-9da3-d9d93f190720
	I0224 15:06:40.086217   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:40.086222   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:40.086227   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:40.086232   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:40.086237   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:40 GMT
	I0224 15:06:40.086305   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:40.086617   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:40.086624   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:40.086630   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:40.086639   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:40.088643   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:06:40.088653   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:40.088659   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:40.088664   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:40.088672   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:40.088678   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:40 GMT
	I0224 15:06:40.088683   32699 round_trippers.go:580]     Audit-Id: fceee19d-6f69-43c3-a51f-359506400691
	I0224 15:06:40.088687   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:40.088971   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:40.583198   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:40.583211   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:40.583217   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:40.583222   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:40.585954   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:40.585968   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:40.585976   32699 round_trippers.go:580]     Audit-Id: 71c8a523-6605-4a45-a659-e1cdbeaf8b25
	I0224 15:06:40.585987   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:40.585998   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:40.586005   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:40.586017   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:40.586028   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:40 GMT
	I0224 15:06:40.586209   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:40.586579   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:40.586587   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:40.586593   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:40.586600   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:40.588824   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:40.588834   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:40.588841   32699 round_trippers.go:580]     Audit-Id: 531b4b6c-c7e8-470e-95a2-45f2bf1401b1
	I0224 15:06:40.588853   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:40.588859   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:40.588863   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:40.588869   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:40.588875   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:40 GMT
	I0224 15:06:40.589309   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"300","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:41.083194   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:41.083211   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:41.083217   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:41.083223   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:41.086091   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:41.086103   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:41.086113   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:41 GMT
	I0224 15:06:41.086122   32699 round_trippers.go:580]     Audit-Id: dbfc62ef-9936-4639-bf0d-cdb0e0062d8d
	I0224 15:06:41.086127   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:41.086132   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:41.086137   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:41.086142   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:41.086680   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:41.087097   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:41.087105   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:41.087112   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:41.087117   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:41.089810   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:41.089819   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:41.089824   32699 round_trippers.go:580]     Audit-Id: cff029cc-901c-4a45-b92f-c3fe303240d2
	I0224 15:06:41.089829   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:41.089834   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:41.089841   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:41.089847   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:41.089851   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:41 GMT
	I0224 15:06:41.089917   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:41.090106   32699 pod_ready.go:102] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"
	I0224 15:06:41.583314   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:41.583333   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:41.583342   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:41.583351   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:41.586067   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:41.586086   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:41.586094   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:41.586100   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:41.586106   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:41 GMT
	I0224 15:06:41.586112   32699 round_trippers.go:580]     Audit-Id: b8ebdda6-13d5-49f7-97c7-e0144172429a
	I0224 15:06:41.586117   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:41.586122   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:41.586194   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:41.586531   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:41.586540   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:41.586547   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:41.586554   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:41.588652   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:41.588663   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:41.588668   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:41.588674   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:41.588680   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:41 GMT
	I0224 15:06:41.588687   32699 round_trippers.go:580]     Audit-Id: d2686503-c521-4a77-a9a7-5d65135f3900
	I0224 15:06:41.588692   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:41.588697   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:41.588776   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:42.083218   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:42.083233   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:42.083240   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:42.083245   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:42.088626   32699 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0224 15:06:42.088640   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:42.088646   32699 round_trippers.go:580]     Audit-Id: fb4dd226-a63b-4db7-818b-4ed7876118a1
	I0224 15:06:42.088654   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:42.088666   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:42.088671   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:42.088676   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:42.088682   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:42 GMT
	I0224 15:06:42.088759   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:42.089048   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:42.089054   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:42.089060   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:42.089065   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:42.091616   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:42.091634   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:42.091640   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:42 GMT
	I0224 15:06:42.091647   32699 round_trippers.go:580]     Audit-Id: 867da644-0d6b-48c1-8028-0a7f9249187f
	I0224 15:06:42.091654   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:42.091662   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:42.091670   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:42.091678   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:42.091987   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:42.583253   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:42.583266   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:42.583273   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:42.583278   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:42.586030   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:42.586041   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:42.586048   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:42.586055   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:42.586062   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:42.586069   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:42.586078   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:42 GMT
	I0224 15:06:42.586083   32699 round_trippers.go:580]     Audit-Id: 0f84a135-b43f-4f06-a3cc-a92e964c0f45
	I0224 15:06:42.586151   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:42.586442   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:42.586448   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:42.586454   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:42.586463   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:42.588375   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:06:42.588388   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:42.588395   32699 round_trippers.go:580]     Audit-Id: 302ddd83-a06d-43ad-b13e-1d536b3f3ac9
	I0224 15:06:42.588402   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:42.588413   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:42.588419   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:42.588425   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:42.588433   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:42 GMT
	I0224 15:06:42.588701   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:43.083596   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:43.083609   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:43.083616   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:43.083622   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:43.086333   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:43.086346   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:43.086354   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:43 GMT
	I0224 15:06:43.086361   32699 round_trippers.go:580]     Audit-Id: 6717d740-6401-445c-b19e-784d9e2fa204
	I0224 15:06:43.086368   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:43.086381   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:43.086425   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:43.086437   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:43.086567   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:43.086849   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:43.086855   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:43.086861   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:43.086867   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:43.089113   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:43.089122   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:43.089128   32699 round_trippers.go:580]     Audit-Id: a75ba1ec-fb2f-4629-8cdf-df16ad47ffbf
	I0224 15:06:43.089133   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:43.089138   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:43.089142   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:43.089148   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:43.089152   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:43 GMT
	I0224 15:06:43.089209   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:43.583432   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:43.583445   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:43.583452   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:43.583457   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:43.586015   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:43.586029   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:43.586035   32699 round_trippers.go:580]     Audit-Id: 14f80c86-f977-4a48-8fe9-de4353e53d5f
	I0224 15:06:43.586041   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:43.586047   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:43.586051   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:43.586056   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:43.586062   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:43 GMT
	I0224 15:06:43.586137   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:43.586471   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:43.586483   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:43.586507   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:43.586516   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:43.588983   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:43.588994   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:43.588999   32699 round_trippers.go:580]     Audit-Id: 451121de-8332-40bf-81d7-11f0982e5ee4
	I0224 15:06:43.589007   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:43.589012   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:43.589018   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:43.589023   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:43.589071   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:43 GMT
	I0224 15:06:43.589434   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:43.589650   32699 pod_ready.go:102] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"
	I0224 15:06:44.083243   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:44.083259   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:44.083266   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:44.083272   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:44.086390   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:44.086405   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:44.086421   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:44.086434   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:44 GMT
	I0224 15:06:44.086440   32699 round_trippers.go:580]     Audit-Id: cf12a4e3-9bbc-4884-896d-3255641a3fb3
	I0224 15:06:44.086445   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:44.086450   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:44.086455   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:44.086521   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:44.086804   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:44.086811   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:44.086818   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:44.086824   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:44.089386   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:44.089401   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:44.089412   32699 round_trippers.go:580]     Audit-Id: e20c3ab3-1181-4fe7-a101-34b6d78a33e9
	I0224 15:06:44.089420   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:44.089427   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:44.089433   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:44.089437   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:44.089447   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:44 GMT
	I0224 15:06:44.089516   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:44.583428   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:44.583441   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:44.583448   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:44.583453   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:44.586688   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:44.586700   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:44.586707   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:44 GMT
	I0224 15:06:44.586712   32699 round_trippers.go:580]     Audit-Id: 342f699a-2d42-431e-9db3-f160a9cf3906
	I0224 15:06:44.586716   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:44.586721   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:44.586726   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:44.586731   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:44.586821   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:44.587144   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:44.587151   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:44.587157   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:44.587165   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:44.589840   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:44.589854   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:44.589861   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:44.589867   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:44.589873   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:44 GMT
	I0224 15:06:44.589882   32699 round_trippers.go:580]     Audit-Id: a5244e89-b702-409f-8a50-bb06ce14c86f
	I0224 15:06:44.589890   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:44.589895   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:44.589998   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:45.083486   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:45.083514   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:45.083527   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:45.083537   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:45.088220   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:45.088233   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:45.088239   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:45 GMT
	I0224 15:06:45.088247   32699 round_trippers.go:580]     Audit-Id: bdd0aa8d-81c7-47a0-89ef-a153b5cf6040
	I0224 15:06:45.088252   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:45.088256   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:45.088261   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:45.088267   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:45.088331   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:45.088620   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:45.088627   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:45.088633   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:45.088638   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:45.093148   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:45.093158   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:45.093164   32699 round_trippers.go:580]     Audit-Id: 65c1eb15-e3ee-4482-bcc8-edc840924893
	I0224 15:06:45.093168   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:45.093173   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:45.093177   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:45.093184   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:45.093189   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:45 GMT
	I0224 15:06:45.093561   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:45.583458   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:45.583473   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:45.583480   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:45.583486   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:45.586257   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:45.586272   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:45.586281   32699 round_trippers.go:580]     Audit-Id: 8cad5767-41cc-4996-98d5-2a50ce2f782b
	I0224 15:06:45.586288   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:45.586295   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:45.586302   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:45.586311   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:45.586324   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:45 GMT
	I0224 15:06:45.586435   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:45.586757   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:45.586766   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:45.586777   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:45.586790   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:45.588737   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:06:45.588748   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:45.588757   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:45.588764   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:45.588771   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:45 GMT
	I0224 15:06:45.588779   32699 round_trippers.go:580]     Audit-Id: 8181b63b-f365-4dc0-bd1a-86402dd6ca1a
	I0224 15:06:45.588786   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:45.588793   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:45.588885   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:46.084576   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:46.084597   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:46.084607   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:46.084615   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:46.087118   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:46.087132   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:46.087141   32699 round_trippers.go:580]     Audit-Id: 9f851fee-fea8-48b3-9fc9-d7ee9557c3a7
	I0224 15:06:46.087150   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:46.087161   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:46.087166   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:46.087171   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:46.087176   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:46 GMT
	I0224 15:06:46.087415   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:46.087702   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:46.087709   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:46.087716   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:46.087723   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:46.089997   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:46.090008   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:46.090017   32699 round_trippers.go:580]     Audit-Id: fce364b8-d2a3-4754-bb49-50ef8609511b
	I0224 15:06:46.090023   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:46.090034   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:46.090041   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:46.090047   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:46.090051   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:46 GMT
	I0224 15:06:46.090166   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:46.090390   32699 pod_ready.go:102] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"
	I0224 15:06:46.583414   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:46.583429   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:46.583435   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:46.583441   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:46.586500   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:46.586514   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:46.586520   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:46.586526   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:46 GMT
	I0224 15:06:46.586534   32699 round_trippers.go:580]     Audit-Id: 857253ed-efc2-4dfa-ac67-d17f3872ce5b
	I0224 15:06:46.586540   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:46.586545   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:46.586552   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:46.586619   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:46.586922   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:46.586929   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:46.586935   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:46.586940   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:46.589529   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:46.589544   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:46.589556   32699 round_trippers.go:580]     Audit-Id: ccbe3dc1-9897-4756-828f-980280e97779
	I0224 15:06:46.589567   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:46.589584   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:46.589595   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:46.589614   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:46.589623   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:46 GMT
	I0224 15:06:46.589705   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:47.083407   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:47.083423   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:47.083434   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:47.083446   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:47.086263   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:47.086277   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:47.086285   32699 round_trippers.go:580]     Audit-Id: 3ad6c614-f9a7-4c7c-a180-ce9dd02e9ee8
	I0224 15:06:47.086293   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:47.086299   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:47.086305   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:47.086310   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:47.086315   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:47 GMT
	I0224 15:06:47.086375   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:47.086668   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:47.086676   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:47.086684   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:47.086692   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:47.088869   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:47.088890   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:47.088903   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:47 GMT
	I0224 15:06:47.088917   32699 round_trippers.go:580]     Audit-Id: c106395a-f7ec-4a32-b3a7-c37d81699edc
	I0224 15:06:47.088930   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:47.088939   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:47.088946   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:47.088955   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:47.089364   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:47.583369   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:47.583387   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:47.583420   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:47.583429   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:47.586003   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:47.586018   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:47.586027   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:47.586037   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:47.586050   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:47.586059   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:47 GMT
	I0224 15:06:47.586076   32699 round_trippers.go:580]     Audit-Id: bc728c5f-e7f8-471a-96bf-dc85feaafacc
	I0224 15:06:47.586085   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:47.586230   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:47.586511   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:47.586517   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:47.586524   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:47.586529   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:47.588859   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:47.588871   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:47.588878   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:47.588883   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:47 GMT
	I0224 15:06:47.588889   32699 round_trippers.go:580]     Audit-Id: c35753ed-ba23-424d-82ca-761877cf2eaf
	I0224 15:06:47.588893   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:47.588899   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:47.588904   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:47.589020   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:48.083414   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:48.083430   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:48.083468   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:48.083479   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:48.086645   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:48.086657   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:48.086663   32699 round_trippers.go:580]     Audit-Id: 69f5dd73-b4f6-4dc7-9954-3182aa53c2ad
	I0224 15:06:48.086668   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:48.086675   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:48.086682   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:48.086686   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:48.086691   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:48 GMT
	I0224 15:06:48.086919   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:48.087218   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:48.087226   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:48.087232   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:48.087237   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:48.089722   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:48.089734   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:48.089739   32699 round_trippers.go:580]     Audit-Id: a542bc98-af07-4fe3-9809-b08232980f34
	I0224 15:06:48.089744   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:48.089749   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:48.089754   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:48.089759   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:48.089764   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:48 GMT
	I0224 15:06:48.089836   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:48.583405   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:48.583423   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:48.583430   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:48.583464   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:48.586378   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:48.586391   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:48.586400   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:48 GMT
	I0224 15:06:48.586405   32699 round_trippers.go:580]     Audit-Id: 9306eae6-b3be-4e16-9324-cb841e563fd7
	I0224 15:06:48.586410   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:48.586415   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:48.586421   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:48.586426   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:48.586501   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"392","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0224 15:06:48.586812   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:48.586821   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:48.586827   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:48.586832   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:48.589322   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:48.589338   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:48.589345   32699 round_trippers.go:580]     Audit-Id: 364c83b4-06e1-4f4b-9c23-cb93113ff450
	I0224 15:06:48.589350   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:48.589360   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:48.589370   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:48.589377   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:48.589384   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:48 GMT
	I0224 15:06:48.589926   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:48.590224   32699 pod_ready.go:102] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"False"
	I0224 15:06:49.083424   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:06:49.083441   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.083447   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.083453   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.086189   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.086200   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.086206   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.086212   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.086217   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.086224   32699 round_trippers.go:580]     Audit-Id: 62e6b8f7-6848-4c98-89f5-c4dd996da150
	I0224 15:06:49.086232   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.086237   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.086506   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"419","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0224 15:06:49.086800   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.086806   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.086812   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.086818   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.088919   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.088929   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.088935   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.088942   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.088953   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.088965   32699 round_trippers.go:580]     Audit-Id: 350eb7d4-fcdc-4dcb-9cdf-dc49beeb7c0d
	I0224 15:06:49.088976   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.088985   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.089201   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:49.089391   32699 pod_ready.go:92] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"True"
	I0224 15:06:49.089404   32699 pod_ready.go:81] duration metric: took 15.012594069s waiting for pod "coredns-787d4945fb-qfqth" in "kube-system" namespace to be "Ready" ...
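The 15-second wait above is minikube's pod_ready helper polling the pod object roughly twice a second until its Ready condition reports True. The sketch below reproduces that loop with client-go; the kubeconfig path, namespace and pod name are placeholders, and this is an illustration of the pattern, not minikube's actual pod_ready.go.

```go
// podready_sketch.go - illustrative only; mirrors the "wait for Ready" loop
// seen in the log above, not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path and object names.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "kube-system", "coredns-787d4945fb-qfqth"

	// Poll every 500ms, give up after 6 minutes (the budget the log mentions).
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q in %q is Ready\n", name, ns)
}
```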
	I0224 15:06:49.089418   32699 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-tkkfd" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.089456   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-tkkfd
	I0224 15:06:49.089461   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.089467   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.089472   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.091828   32699 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0224 15:06:49.091838   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.091844   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.091849   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.091854   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.091859   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.091864   32699 round_trippers.go:580]     Content-Length: 216
	I0224 15:06:49.091870   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.091875   32699 round_trippers.go:580]     Audit-Id: bbe3c541-f7e2-46f5-8cdc-5f2937304e1c
	I0224 15:06:49.091889   32699 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-tkkfd\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-tkkfd","kind":"pods"},"code":404}
	I0224 15:06:49.092009   32699 pod_ready.go:97] error getting pod "coredns-787d4945fb-tkkfd" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-tkkfd" not found
	I0224 15:06:49.092016   32699 pod_ready.go:81] duration metric: took 2.59101ms waiting for pod "coredns-787d4945fb-tkkfd" in "kube-system" namespace to be "Ready" ...
	E0224 15:06:49.092022   32699 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-tkkfd" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-tkkfd" not found
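The 404 for the second coredns pod is deliberately downgraded to a skip rather than a failure: the pod no longer exists, so there is nothing to wait for. A minimal sketch of that decision using client-go's errors helper is below; the kubeconfig path is a placeholder.

```go
// skip_notfound_sketch.go - illustrative: treat a 404 for a pod as "skip this
// pod" rather than a hard failure, as the log does for coredns-787d4945fb-tkkfd.
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	name := "coredns-787d4945fb-tkkfd"
	_, err = cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		// The pod object is gone, so the readiness wait is skipped
		// instead of failing the whole check.
		fmt.Printf("pod %q not found, skipping\n", name)
	case err != nil:
		panic(err) // any other error is a real failure
	default:
		fmt.Printf("pod %q exists\n", name)
	}
}
```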
	I0224 15:06:49.092026   32699 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.092058   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/etcd-multinode-358000
	I0224 15:06:49.092062   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.092068   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.092074   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.094081   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:06:49.094091   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.094097   32699 round_trippers.go:580]     Audit-Id: fb3e70a9-4c35-489e-abbc-f5f45ee3eeb1
	I0224 15:06:49.094102   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.094107   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.094112   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.094117   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.094122   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.094168   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-358000","namespace":"kube-system","uid":"cae08591-19d2-4e50-ba6b-73cf4552218c","resourceVersion":"282","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"f0e985c72569733baf436fab6e966c50","kubernetes.io/config.mirror":"f0e985c72569733baf436fab6e966c50","kubernetes.io/config.seen":"2023-02-24T23:06:20.399469529Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0224 15:06:49.094397   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.094403   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.094409   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.094414   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.096860   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.096871   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.096877   32699 round_trippers.go:580]     Audit-Id: f950b3a3-437f-4ed0-b111-65a481c05b81
	I0224 15:06:49.096883   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.096888   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.096893   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.096898   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.096903   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.097038   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:49.097224   32699 pod_ready.go:92] pod "etcd-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:06:49.097229   32699 pod_ready.go:81] duration metric: took 5.198124ms waiting for pod "etcd-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.097236   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.097265   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-358000
	I0224 15:06:49.097270   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.097275   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.097282   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.099874   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.099887   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.099895   32699 round_trippers.go:580]     Audit-Id: f03e93ef-890b-4c13-9d3b-38d71ca34966
	I0224 15:06:49.099904   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.099909   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.099915   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.099920   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.099925   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.099995   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-358000","namespace":"kube-system","uid":"9f99728a-c30f-46f0-aa6c-914ce4f95c85","resourceVersion":"385","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"59d82eb7ee48c62b0e1b1157e56efad8","kubernetes.io/config.mirror":"59d82eb7ee48c62b0e1b1157e56efad8","kubernetes.io/config.seen":"2023-02-24T23:06:20.399481307Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0224 15:06:49.100269   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.100275   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.100281   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.100287   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.102487   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.102497   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.102503   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.102508   32699 round_trippers.go:580]     Audit-Id: f413e262-abcf-4002-86d8-553b3ac7c508
	I0224 15:06:49.102516   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.102521   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.102526   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.102531   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.102634   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:49.102828   32699 pod_ready.go:92] pod "kube-apiserver-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:06:49.102836   32699 pod_ready.go:81] duration metric: took 5.594382ms waiting for pod "kube-apiserver-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.102842   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.102883   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-358000
	I0224 15:06:49.102890   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.102908   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.102917   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.105312   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.105322   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.105327   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.105332   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.105338   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.105342   32699 round_trippers.go:580]     Audit-Id: aa07d30a-3b52-4495-8b48-ed59f36ae7c8
	I0224 15:06:49.105349   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.105357   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.105441   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-358000","namespace":"kube-system","uid":"6d26b160-2631-4696-9633-0da5de0f9e6c","resourceVersion":"284","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"caa2ad2bdf9da4ce166e0faa9958c6ef","kubernetes.io/config.mirror":"caa2ad2bdf9da4ce166e0faa9958c6ef","kubernetes.io/config.seen":"2023-02-24T23:06:20.399482388Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0224 15:06:49.105707   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.105713   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.105718   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.105723   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.108015   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.108028   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.108035   32699 round_trippers.go:580]     Audit-Id: b5961f1c-25ec-41f1-ae7d-2f8099da22f3
	I0224 15:06:49.108056   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.108064   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.108068   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.108073   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.108078   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.108182   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:49.108379   32699 pod_ready.go:92] pod "kube-controller-manager-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:06:49.108387   32699 pod_ready.go:81] duration metric: took 5.538342ms waiting for pod "kube-controller-manager-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.108395   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rsf5q" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.108429   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-rsf5q
	I0224 15:06:49.108433   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.108439   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.108445   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.110552   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.110570   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.110581   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.110591   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.110603   32699 round_trippers.go:580]     Audit-Id: e0a50677-0fe7-4a42-93bb-c7431a7273bd
	I0224 15:06:49.110611   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.110619   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.110624   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.110680   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rsf5q","generateName":"kube-proxy-","namespace":"kube-system","uid":"34fab1a9-3416-47c1-9239-d7276b496a73","resourceVersion":"389","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0224 15:06:49.284894   32699 request.go:622] Waited for 173.950227ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.284955   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.284963   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.284973   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.284981   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.287797   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:06:49.287808   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.287814   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.287819   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.287824   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.287829   32699 round_trippers.go:580]     Audit-Id: 2f9d3be0-92bc-4008-95fd-a502340f4527
	I0224 15:06:49.287834   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.287839   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.287912   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:49.288105   32699 pod_ready.go:92] pod "kube-proxy-rsf5q" in "kube-system" namespace has status "Ready":"True"
	I0224 15:06:49.288111   32699 pod_ready.go:81] duration metric: took 179.70588ms waiting for pod "kube-proxy-rsf5q" in "kube-system" namespace to be "Ready" ...
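The "Waited for ... due to client-side throttling" messages that start appearing here come from client-go's own token-bucket limiter (QPS 5, burst 10 by default), not from API Priority and Fairness on the server. The sketch below shows where those limits live on a rest.Config; the values and kubeconfig path are illustrative.

```go
// throttle_sketch.go - illustrative: the "client-side throttling" waits in the
// log come from client-go's token-bucket limiter on rest.Config, which defaults
// to QPS=5 and Burst=10; raising them reduces those waits.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}

	// Bump the client-side limiter; without this, bursts of GETs like the
	// per-pod readiness checks above queue up behind the default budget.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = cs
	fmt.Println("client configured with QPS=50 Burst=100")
}
```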
	I0224 15:06:49.288117   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.483832   32699 request.go:622] Waited for 195.668979ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-358000
	I0224 15:06:49.483891   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-358000
	I0224 15:06:49.483926   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.483938   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.483956   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.487985   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:49.487996   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.488001   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.488012   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.488017   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.488022   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.488027   32699 round_trippers.go:580]     Audit-Id: 26299cf4-e251-46bc-b002-c0918acae9e0
	I0224 15:06:49.488032   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.488089   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-358000","namespace":"kube-system","uid":"f1b648f4-a02a-4931-a791-578a6dba081f","resourceVersion":"281","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1df55cdda41cb4d1ff214c8ea21fdc45","kubernetes.io/config.mirror":"1df55cdda41cb4d1ff214c8ea21fdc45","kubernetes.io/config.seen":"2023-02-24T23:06:20.399486321Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0224 15:06:49.683832   32699 request.go:622] Waited for 195.495235ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.683919   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:06:49.683928   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.683940   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.683950   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.687895   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:49.687905   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.687911   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.687916   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.687921   32699 round_trippers.go:580]     Audit-Id: f7df7291-7f4f-4c6d-96b0-dddf5f5dc535
	I0224 15:06:49.687926   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.687931   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.687936   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.687989   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0224 15:06:49.688194   32699 pod_ready.go:92] pod "kube-scheduler-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:06:49.688200   32699 pod_ready.go:81] duration metric: took 400.066577ms waiting for pod "kube-scheduler-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:06:49.688206   32699 pod_ready.go:38] duration metric: took 15.619991597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 15:06:49.688220   32699 api_server.go:51] waiting for apiserver process to appear ...
	I0224 15:06:49.688277   32699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:06:49.697953   32699 command_runner.go:130] > 1929
	I0224 15:06:49.698643   32699 api_server.go:71] duration metric: took 16.043064121s to wait for apiserver process to appear ...
	I0224 15:06:49.698653   32699 api_server.go:87] waiting for apiserver healthz status ...
	I0224 15:06:49.698664   32699 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:58093/healthz ...
	I0224 15:06:49.704202   32699 api_server.go:278] https://127.0.0.1:58093/healthz returned 200:
	ok
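With every control-plane pod Ready, the check falls through to the apiserver itself: first a pgrep for the process, then a GET on /healthz that must answer 200 with the body "ok". A minimal client-go sketch of the /healthz probe, with a placeholder kubeconfig path:

```go
// healthz_sketch.go - illustrative probe of the apiserver /healthz endpoint,
// mirroring the check logged above; not minikube's api_server.go.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hit /healthz directly; a healthy apiserver answers 200 with body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("/healthz: %s\n", body)
}
```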
	I0224 15:06:49.704236   32699 round_trippers.go:463] GET https://127.0.0.1:58093/version
	I0224 15:06:49.704241   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.704247   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.704253   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.705598   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:06:49.705607   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.705613   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.705618   32699 round_trippers.go:580]     Audit-Id: 8c19920b-abe0-425e-8f0f-3180324a9838
	I0224 15:06:49.705623   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.705628   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.705633   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.705638   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.705647   32699 round_trippers.go:580]     Content-Length: 263
	I0224 15:06:49.705656   32699 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0224 15:06:49.705701   32699 api_server.go:140] control plane version: v1.26.1
	I0224 15:06:49.705708   32699 api_server.go:130] duration metric: took 7.050391ms to wait for apiserver health ...
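The control-plane version printed above is the response to GET /version; client-go's discovery client wraps the same call. An illustrative sketch, again with a placeholder kubeconfig path:

```go
// version_sketch.go - illustrative: fetch the control-plane version the same
// way the GET /version above does, via the discovery client.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// ServerVersion wraps GET /version and returns major/minor/gitVersion etc.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s (go %s, %s)\n", v.GitVersion, v.GoVersion, v.Platform)
}
```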
	I0224 15:06:49.705718   32699 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 15:06:49.884067   32699 request.go:622] Waited for 178.291018ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods
	I0224 15:06:49.884153   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods
	I0224 15:06:49.884166   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:49.884183   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:49.884200   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:49.889202   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:49.889214   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:49.889220   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:49.889225   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:49.889246   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:49 GMT
	I0224 15:06:49.889255   32699 round_trippers.go:580]     Audit-Id: 6a368721-9803-433e-9b62-65240f2912d3
	I0224 15:06:49.889262   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:49.889268   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:49.890070   32699 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"419","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0224 15:06:49.891331   32699 system_pods.go:59] 8 kube-system pods found
	I0224 15:06:49.891341   32699 system_pods.go:61] "coredns-787d4945fb-qfqth" [e37f5c65-d431-4ae8-9447-b6d61ee81dcd] Running
	I0224 15:06:49.891346   32699 system_pods.go:61] "etcd-multinode-358000" [cae08591-19d2-4e50-ba6b-73cf4552218c] Running
	I0224 15:06:49.891349   32699 system_pods.go:61] "kindnet-894f4" [75e84b3d-db2e-44fe-8674-95848e8b8051] Running
	I0224 15:06:49.891353   32699 system_pods.go:61] "kube-apiserver-multinode-358000" [9f99728a-c30f-46f0-aa6c-914ce4f95c85] Running
	I0224 15:06:49.891357   32699 system_pods.go:61] "kube-controller-manager-multinode-358000" [6d26b160-2631-4696-9633-0da5de0f9e6c] Running
	I0224 15:06:49.891361   32699 system_pods.go:61] "kube-proxy-rsf5q" [34fab1a9-3416-47c1-9239-d7276b496a73] Running
	I0224 15:06:49.891366   32699 system_pods.go:61] "kube-scheduler-multinode-358000" [f1b648f4-a02a-4931-a791-578a6dba081f] Running
	I0224 15:06:49.891370   32699 system_pods.go:61] "storage-provisioner" [ae236ae0-e586-40c5-804d-f33bc98c250a] Running
	I0224 15:06:49.891388   32699 system_pods.go:74] duration metric: took 185.659294ms to wait for pod list to return data ...
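The "8 kube-system pods found ... Running" summary is built from a single pod list in the kube-system namespace. A sketch of the same listing, with a placeholder kubeconfig path:

```go
// syspods_sketch.go - illustrative: list kube-system pods and report their
// phases, as the "8 kube-system pods found" summary above does.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
```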
	I0224 15:06:49.891397   32699 default_sa.go:34] waiting for default service account to be created ...
	I0224 15:06:50.083623   32699 request.go:622] Waited for 192.174934ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/namespaces/default/serviceaccounts
	I0224 15:06:50.083673   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/default/serviceaccounts
	I0224 15:06:50.083680   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:50.083692   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:50.083745   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:50.088216   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:06:50.088226   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:50.088232   32699 round_trippers.go:580]     Audit-Id: 7ce2d208-e05a-40a1-a64a-6d2760c6594a
	I0224 15:06:50.088237   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:50.088242   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:50.088247   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:50.088252   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:50.088257   32699 round_trippers.go:580]     Content-Length: 261
	I0224 15:06:50.088263   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:50 GMT
	I0224 15:06:50.088277   32699 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"350a72df-ca83-4097-88f4-faff47ce9565","resourceVersion":"304","creationTimestamp":"2023-02-24T23:06:32Z"}}]}
	I0224 15:06:50.088397   32699 default_sa.go:45] found service account: "default"
	I0224 15:06:50.088404   32699 default_sa.go:55] duration metric: took 196.995923ms for default service account to be created ...
	I0224 15:06:50.088410   32699 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 15:06:50.283932   32699 request.go:622] Waited for 195.475279ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods
	I0224 15:06:50.284027   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods
	I0224 15:06:50.284037   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:50.284050   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:50.284060   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:50.289630   32699 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0224 15:06:50.289643   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:50.289649   32699 round_trippers.go:580]     Audit-Id: 8c99a1bd-af97-464c-907e-20d9c4d3df13
	I0224 15:06:50.289654   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:50.289658   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:50.289663   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:50.289668   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:50.289675   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:50 GMT
	I0224 15:06:50.290064   32699 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"419","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0224 15:06:50.291326   32699 system_pods.go:86] 8 kube-system pods found
	I0224 15:06:50.291336   32699 system_pods.go:89] "coredns-787d4945fb-qfqth" [e37f5c65-d431-4ae8-9447-b6d61ee81dcd] Running
	I0224 15:06:50.291340   32699 system_pods.go:89] "etcd-multinode-358000" [cae08591-19d2-4e50-ba6b-73cf4552218c] Running
	I0224 15:06:50.291344   32699 system_pods.go:89] "kindnet-894f4" [75e84b3d-db2e-44fe-8674-95848e8b8051] Running
	I0224 15:06:50.291348   32699 system_pods.go:89] "kube-apiserver-multinode-358000" [9f99728a-c30f-46f0-aa6c-914ce4f95c85] Running
	I0224 15:06:50.291352   32699 system_pods.go:89] "kube-controller-manager-multinode-358000" [6d26b160-2631-4696-9633-0da5de0f9e6c] Running
	I0224 15:06:50.291356   32699 system_pods.go:89] "kube-proxy-rsf5q" [34fab1a9-3416-47c1-9239-d7276b496a73] Running
	I0224 15:06:50.291360   32699 system_pods.go:89] "kube-scheduler-multinode-358000" [f1b648f4-a02a-4931-a791-578a6dba081f] Running
	I0224 15:06:50.291363   32699 system_pods.go:89] "storage-provisioner" [ae236ae0-e586-40c5-804d-f33bc98c250a] Running
	I0224 15:06:50.291368   32699 system_pods.go:126] duration metric: took 202.947965ms to wait for k8s-apps to be running ...
	I0224 15:06:50.291373   32699 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 15:06:50.291413   32699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:06:50.301704   32699 system_svc.go:56] duration metric: took 10.326484ms WaitForService to wait for kubelet.
	I0224 15:06:50.301718   32699 kubeadm.go:578] duration metric: took 16.646122501s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
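The kubelet check is a single systemctl invocation executed over SSH inside the node container. The sketch below runs the same command locally with os/exec as a stand-in for minikube's ssh_runner; it assumes passwordless sudo on the target.

```go
// kubelet_check_sketch.go - illustrative: the kubelet liveness check is one
// systemctl call whose exit code carries the answer; run locally here as a
// stand-in for minikube's ssh_runner.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "--quiet" makes systemctl signal state purely via its exit code.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
```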
	I0224 15:06:50.301734   32699 node_conditions.go:102] verifying NodePressure condition ...
	I0224 15:06:50.483579   32699 request.go:622] Waited for 181.795065ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/nodes
	I0224 15:06:50.483665   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes
	I0224 15:06:50.483676   32699 round_trippers.go:469] Request Headers:
	I0224 15:06:50.483687   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:06:50.483698   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:06:50.487452   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:06:50.487463   32699 round_trippers.go:577] Response Headers:
	I0224 15:06:50.487469   32699 round_trippers.go:580]     Audit-Id: 6264b331-4a1a-48bc-9f57-ac56e028901a
	I0224 15:06:50.487474   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:06:50.487481   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:06:50.487487   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:06:50.487491   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:06:50.487496   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:06:50 GMT
	I0224 15:06:50.487569   32699 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"404","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5007 chars]
	I0224 15:06:50.487794   32699 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0224 15:06:50.487807   32699 node_conditions.go:123] node cpu capacity is 6
	I0224 15:06:50.487817   32699 node_conditions.go:105] duration metric: took 186.072966ms to run NodePressure ...
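The NodePressure verification reads the node list once and inspects capacity (ephemeral storage, CPU) plus the pressure conditions. An illustrative sketch with a placeholder kubeconfig path:

```go
// nodepressure_sketch.go - illustrative: read node capacity and pressure
// conditions, as the NodePressure verification above does.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, eph.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
```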
	I0224 15:06:50.487827   32699 start.go:228] waiting for startup goroutines ...
	I0224 15:06:50.487833   32699 start.go:233] waiting for cluster config update ...
	I0224 15:06:50.487843   32699 start.go:242] writing updated cluster config ...
	I0224 15:06:50.509489   32699 out.go:177] 
	I0224 15:06:50.530894   32699 config.go:182] Loaded profile config "multinode-358000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:06:50.531005   32699 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/config.json ...
	I0224 15:06:50.553416   32699 out.go:177] * Starting worker node multinode-358000-m02 in cluster multinode-358000
	I0224 15:06:50.575234   32699 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 15:06:50.596258   32699 out.go:177] * Pulling base image ...
	I0224 15:06:50.638414   32699 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:06:50.638456   32699 cache.go:57] Caching tarball of preloaded images
	I0224 15:06:50.638416   32699 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 15:06:50.638651   32699 preload.go:174] Found /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 15:06:50.638672   32699 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 15:06:50.638778   32699 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/config.json ...
	I0224 15:06:50.696096   32699 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 15:06:50.696131   32699 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 15:06:50.696154   32699 cache.go:193] Successfully downloaded all kic artifacts
	I0224 15:06:50.696186   32699 start.go:364] acquiring machines lock for multinode-358000-m02: {Name:mk956cff82cb268a03a2fa83764d58115b1b74f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 15:06:50.696338   32699 start.go:368] acquired machines lock for "multinode-358000-m02" in 140.575µs
	I0224 15:06:50.696377   32699 start.go:93] Provisioning new machine with config: &{Name:multinode-358000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 15:06:50.696449   32699 start.go:125] createHost starting for "m02" (driver="docker")
	I0224 15:06:50.718122   32699 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0224 15:06:50.718286   32699 start.go:159] libmachine.API.Create for "multinode-358000" (driver="docker")
	I0224 15:06:50.718323   32699 client.go:168] LocalClient.Create starting
	I0224 15:06:50.718519   32699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem
	I0224 15:06:50.718610   32699 main.go:141] libmachine: Decoding PEM data...
	I0224 15:06:50.718635   32699 main.go:141] libmachine: Parsing certificate...
	I0224 15:06:50.718728   32699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem
	I0224 15:06:50.718792   32699 main.go:141] libmachine: Decoding PEM data...
	I0224 15:06:50.718807   32699 main.go:141] libmachine: Parsing certificate...
	I0224 15:06:50.739049   32699 cli_runner.go:164] Run: docker network inspect multinode-358000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 15:06:50.795271   32699 network_create.go:76] Found existing network {name:multinode-358000 subnet:0xc00169c3c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0224 15:06:50.795309   32699 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-358000-m02" container
	I0224 15:06:50.795431   32699 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 15:06:50.852310   32699 cli_runner.go:164] Run: docker volume create multinode-358000-m02 --label name.minikube.sigs.k8s.io=multinode-358000-m02 --label created_by.minikube.sigs.k8s.io=true
	I0224 15:06:50.907785   32699 oci.go:103] Successfully created a docker volume multinode-358000-m02
	I0224 15:06:50.907904   32699 cli_runner.go:164] Run: docker run --rm --name multinode-358000-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-358000-m02 --entrypoint /usr/bin/test -v multinode-358000-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0224 15:06:51.344944   32699 oci.go:107] Successfully prepared a docker volume multinode-358000-m02
	I0224 15:06:51.344977   32699 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:06:51.344989   32699 kic.go:190] Starting extracting preloaded images to volume ...
	I0224 15:06:51.345107   32699 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-358000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 15:06:57.969175   32699 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-358000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.623816766s)
	I0224 15:06:57.969197   32699 kic.go:199] duration metric: took 6.624007 seconds to extract preloaded images to volume
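The extraction step above is minikube's KIC preload pattern: create a named volume for the new node, then run a throwaway container from the kicbase image with the lz4 preload tarball mounted read-only and the volume mounted at /extractDir, letting tar unpack the cached images directly into what later becomes the node's /var. A minimal sketch of the same pattern, reusing the volume name, tarball path, and image reference from the log (digest dropped for brevity):

    PRELOAD=/Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
    VOLUME=multinode-358000-m02
    KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768

    docker volume create "$VOLUME"
    # tar runs inside the kicbase image; -I lz4 decompresses the preload on the fly
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD":/preloaded.tar:ro \
      -v "$VOLUME":/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir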
	I0224 15:06:57.969310   32699 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 15:06:58.114499   32699 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-358000-m02 --name multinode-358000-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-358000-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-358000-m02 --network multinode-358000 --ip 192.168.58.3 --volume multinode-358000-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0224 15:06:58.466019   32699 cli_runner.go:164] Run: docker container inspect multinode-358000-m02 --format={{.State.Running}}
	I0224 15:06:58.531727   32699 cli_runner.go:164] Run: docker container inspect multinode-358000-m02 --format={{.State.Status}}
	I0224 15:06:58.596177   32699 cli_runner.go:164] Run: docker exec multinode-358000-m02 stat /var/lib/dpkg/alternatives/iptables
	I0224 15:06:58.713610   32699 oci.go:144] the created container "multinode-358000-m02" has a running status.
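The node container publishes SSH (along with the Docker and API ports) on ephemeral localhost ports rather than fixed host ports, which is why the SSH client later in this log connects to 127.0.0.1:58163. Either of the following recovers the mapping for port 22; the second is the exact template the log itself uses:

    docker port multinode-358000-m02 22
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-358000-m02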
	I0224 15:06:58.713643   32699 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa...
	I0224 15:06:58.902350   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0224 15:06:58.902415   32699 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 15:06:59.007556   32699 cli_runner.go:164] Run: docker container inspect multinode-358000-m02 --format={{.State.Status}}
	I0224 15:06:59.070105   32699 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 15:06:59.070124   32699 kic_runner.go:114] Args: [docker exec --privileged multinode-358000-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
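The key provisioning above generates a fresh id_rsa on the host, pushes the public half into the container as /home/docker/.ssh/authorized_keys, and fixes ownership via a privileged exec. minikube streams the file through its own kic_runner; a rough docker-cli equivalent (an approximation, not the literal mechanism) would be:

    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker exec multinode-358000-m02 mkdir -p /home/docker/.ssh
    docker cp ./id_rsa.pub multinode-358000-m02:/home/docker/.ssh/authorized_keys
    docker exec --privileged multinode-358000-m02 chown docker:docker /home/docker/.ssh/authorized_keys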
	I0224 15:06:59.173186   32699 cli_runner.go:164] Run: docker container inspect multinode-358000-m02 --format={{.State.Status}}
	I0224 15:06:59.231411   32699 machine.go:88] provisioning docker machine ...
	I0224 15:06:59.231457   32699 ubuntu.go:169] provisioning hostname "multinode-358000-m02"
	I0224 15:06:59.231559   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:06:59.291513   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:59.291904   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58163 <nil> <nil>}
	I0224 15:06:59.291918   32699 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-358000-m02 && echo "multinode-358000-m02" | sudo tee /etc/hostname
	I0224 15:06:59.436543   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-358000-m02
	
	I0224 15:06:59.436633   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:06:59.495237   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:06:59.495598   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58163 <nil> <nil>}
	I0224 15:06:59.495615   32699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-358000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-358000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-358000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 15:06:59.630904   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 15:06:59.630930   32699 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
	I0224 15:06:59.630949   32699 ubuntu.go:177] setting up certificates
	I0224 15:06:59.630959   32699 provision.go:83] configureAuth start
	I0224 15:06:59.631042   32699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000-m02
	I0224 15:06:59.688571   32699 provision.go:138] copyHostCerts
	I0224 15:06:59.688627   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:06:59.688699   32699 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
	I0224 15:06:59.688704   32699 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:06:59.688845   32699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
	I0224 15:06:59.689012   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:06:59.689046   32699 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
	I0224 15:06:59.689051   32699 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:06:59.689116   32699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
	I0224 15:06:59.689236   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:06:59.689280   32699 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
	I0224 15:06:59.689284   32699 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:06:59.689348   32699 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
	I0224 15:06:59.689469   32699 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.multinode-358000-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-358000-m02]
	I0224 15:06:59.878774   32699 provision.go:172] copyRemoteCerts
	I0224 15:06:59.878846   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 15:06:59.878903   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:06:59.936002   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58163 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa Username:docker}
	I0224 15:07:00.031564   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0224 15:07:00.031644   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 15:07:00.049713   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0224 15:07:00.049807   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0224 15:07:00.067036   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0224 15:07:00.067123   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 15:07:00.084636   32699 provision.go:86] duration metric: configureAuth took 453.653262ms
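configureAuth generates the Docker server certificate in-process (Go crypto), signs it with the minikube CA, gives it the SANs listed above, and then copies ca.pem, server.pem and server-key.pem to /etc/docker on the node. A hypothetical openssl equivalent of the generation step, using the same file names and SANs, would look roughly like:

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.multinode-358000-m02/CN=multinode-358000-m02"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-358000-m02')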
	I0224 15:07:00.084649   32699 ubuntu.go:193] setting minikube options for container-runtime
	I0224 15:07:00.084805   32699 config.go:182] Loaded profile config "multinode-358000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:07:00.084870   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:00.143243   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:07:00.143586   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58163 <nil> <nil>}
	I0224 15:07:00.143597   32699 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 15:07:00.278545   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 15:07:00.278557   32699 ubuntu.go:71] root file system type: overlay
	I0224 15:07:00.278649   32699 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 15:07:00.278747   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:00.338973   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:07:00.339338   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58163 <nil> <nil>}
	I0224 15:07:00.339388   32699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 15:07:00.482878   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 15:07:00.482984   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:00.543590   32699 main.go:141] libmachine: Using SSH client type: native
	I0224 15:07:00.543961   32699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 58163 <nil> <nil>}
	I0224 15:07:00.543984   32699 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 15:07:01.181738   32699 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:07:00.480379297 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 15:07:01.181761   32699 machine.go:91] provisioned docker machine in 1.95026147s
	I0224 15:07:01.181767   32699 client.go:171] LocalClient.Create took 10.463124678s
	I0224 15:07:01.181784   32699 start.go:167] duration metric: libmachine.API.Create for "multinode-358000" took 10.463186209s
	I0224 15:07:01.181790   32699 start.go:300] post-start starting for "multinode-358000-m02" (driver="docker")
	I0224 15:07:01.181795   32699 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 15:07:01.181875   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 15:07:01.181930   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:01.241583   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58163 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa Username:docker}
	I0224 15:07:01.338747   32699 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 15:07:01.343201   32699 command_runner.go:130] > NAME="Ubuntu"
	I0224 15:07:01.343212   32699 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0224 15:07:01.343216   32699 command_runner.go:130] > ID=ubuntu
	I0224 15:07:01.343220   32699 command_runner.go:130] > ID_LIKE=debian
	I0224 15:07:01.343225   32699 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0224 15:07:01.343229   32699 command_runner.go:130] > VERSION_ID="20.04"
	I0224 15:07:01.343233   32699 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0224 15:07:01.343237   32699 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0224 15:07:01.343242   32699 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0224 15:07:01.343248   32699 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0224 15:07:01.343256   32699 command_runner.go:130] > VERSION_CODENAME=focal
	I0224 15:07:01.343260   32699 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0224 15:07:01.343307   32699 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 15:07:01.343323   32699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 15:07:01.343329   32699 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 15:07:01.343334   32699 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 15:07:01.343341   32699 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
	I0224 15:07:01.343441   32699 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
	I0224 15:07:01.343604   32699 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
	I0224 15:07:01.343610   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> /etc/ssl/certs/268712.pem
	I0224 15:07:01.343794   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 15:07:01.351806   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:07:01.371381   32699 start.go:303] post-start completed in 189.577003ms
	I0224 15:07:01.371906   32699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000-m02
	I0224 15:07:01.430527   32699 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/config.json ...
	I0224 15:07:01.430959   32699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 15:07:01.431019   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:01.490241   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58163 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa Username:docker}
	I0224 15:07:01.582412   32699 command_runner.go:130] > 6%!
	(MISSING)I0224 15:07:01.582507   32699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 15:07:01.586996   32699 command_runner.go:130] > 92G
	I0224 15:07:01.587317   32699 start.go:128] duration metric: createHost completed in 10.890535302s
	I0224 15:07:01.587329   32699 start.go:83] releasing machines lock for "multinode-358000-m02", held for 10.890657634s
	I0224 15:07:01.587425   32699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000-m02
	I0224 15:07:01.670143   32699 out.go:177] * Found network options:
	I0224 15:07:01.691081   32699 out.go:177]   - NO_PROXY=192.168.58.2
	W0224 15:07:01.729153   32699 proxy.go:119] fail to check proxy env: Error ip not in block
	W0224 15:07:01.729188   32699 proxy.go:119] fail to check proxy env: Error ip not in block
	I0224 15:07:01.729282   32699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 15:07:01.729333   32699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 15:07:01.729352   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:01.729425   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:07:01.792549   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58163 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa Username:docker}
	I0224 15:07:01.792604   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58163 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa Username:docker}
	I0224 15:07:01.885889   32699 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0224 15:07:01.885906   32699 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0224 15:07:01.885911   32699 command_runner.go:130] > Device: f2h/242d	Inode: 2885207     Links: 1
	I0224 15:07:01.885916   32699 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 15:07:01.885927   32699 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0224 15:07:01.885940   32699 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0224 15:07:01.885953   32699 command_runner.go:130] > Change: 2023-02-24 22:41:56.862825099 +0000
	I0224 15:07:01.885963   32699 command_runner.go:130] >  Birth: -
	I0224 15:07:01.886033   32699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 15:07:01.938812   32699 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0224 15:07:01.938851   32699 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
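The find/sed pass above injects a "name" field into the loopback CNI config and pins its cniVersion to 1.0.0 (the usual reason being that newer CNI validation rejects configs without a name or with an unsupported version). The original 54-byte file is not printed in the log; after patching it should look approximately like this (contents reconstructed, not copied from the run):

    cat /etc/cni/net.d/200-loopback.conf
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "loopback",
    #   "type": "loopback"
    # }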
	I0224 15:07:01.938913   32699 ssh_runner.go:195] Run: which cri-dockerd
	I0224 15:07:01.943256   32699 command_runner.go:130] > /usr/bin/cri-dockerd
	I0224 15:07:01.943385   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 15:07:01.951642   32699 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 15:07:01.964604   32699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 15:07:01.979412   32699 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0224 15:07:01.979438   32699 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0224 15:07:01.979445   32699 start.go:485] detecting cgroup driver to use...
	I0224 15:07:01.979456   32699 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:07:01.979532   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:07:01.992106   32699 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0224 15:07:01.992119   32699 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
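With /etc/crictl.yaml in place, crictl resolves its endpoint from the file instead of needing --runtime-endpoint on every invocation; the same file is rewritten further down to point at cri-dockerd once Docker is the selected runtime. Illustrative usage (not commands from this run):

    sudo crictl info
    sudo crictl ps -a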
	I0224 15:07:01.992867   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 15:07:02.001390   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 15:07:02.010157   32699 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 15:07:02.010219   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 15:07:02.018937   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:07:02.027400   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 15:07:02.035890   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:07:02.044374   32699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 15:07:02.052607   32699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 15:07:02.061765   32699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 15:07:02.068551   32699 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0224 15:07:02.069298   32699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 15:07:02.076586   32699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:07:02.148349   32699 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 15:07:02.224253   32699 start.go:485] detecting cgroup driver to use...
	I0224 15:07:02.224271   32699 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:07:02.224330   32699 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 15:07:02.234831   32699 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0224 15:07:02.235098   32699 command_runner.go:130] > [Unit]
	I0224 15:07:02.235112   32699 command_runner.go:130] > Description=Docker Application Container Engine
	I0224 15:07:02.235123   32699 command_runner.go:130] > Documentation=https://docs.docker.com
	I0224 15:07:02.235132   32699 command_runner.go:130] > BindsTo=containerd.service
	I0224 15:07:02.235142   32699 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0224 15:07:02.235162   32699 command_runner.go:130] > Wants=network-online.target
	I0224 15:07:02.235172   32699 command_runner.go:130] > Requires=docker.socket
	I0224 15:07:02.235178   32699 command_runner.go:130] > StartLimitBurst=3
	I0224 15:07:02.235183   32699 command_runner.go:130] > StartLimitIntervalSec=60
	I0224 15:07:02.235200   32699 command_runner.go:130] > [Service]
	I0224 15:07:02.235203   32699 command_runner.go:130] > Type=notify
	I0224 15:07:02.235207   32699 command_runner.go:130] > Restart=on-failure
	I0224 15:07:02.235233   32699 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0224 15:07:02.235240   32699 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0224 15:07:02.235255   32699 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0224 15:07:02.235276   32699 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0224 15:07:02.235306   32699 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0224 15:07:02.235321   32699 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0224 15:07:02.235344   32699 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0224 15:07:02.235372   32699 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0224 15:07:02.235389   32699 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0224 15:07:02.235396   32699 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0224 15:07:02.235400   32699 command_runner.go:130] > ExecStart=
	I0224 15:07:02.235414   32699 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0224 15:07:02.235420   32699 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0224 15:07:02.235426   32699 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0224 15:07:02.235431   32699 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0224 15:07:02.235434   32699 command_runner.go:130] > LimitNOFILE=infinity
	I0224 15:07:02.235438   32699 command_runner.go:130] > LimitNPROC=infinity
	I0224 15:07:02.235443   32699 command_runner.go:130] > LimitCORE=infinity
	I0224 15:07:02.235463   32699 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0224 15:07:02.235469   32699 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0224 15:07:02.235492   32699 command_runner.go:130] > TasksMax=infinity
	I0224 15:07:02.235496   32699 command_runner.go:130] > TimeoutStartSec=0
	I0224 15:07:02.235503   32699 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0224 15:07:02.235507   32699 command_runner.go:130] > Delegate=yes
	I0224 15:07:02.235515   32699 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0224 15:07:02.235519   32699 command_runner.go:130] > KillMode=process
	I0224 15:07:02.235523   32699 command_runner.go:130] > [Install]
	I0224 15:07:02.235526   32699 command_runner.go:130] > WantedBy=multi-user.target
	I0224 15:07:02.236163   32699 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 15:07:02.236230   32699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 15:07:02.246828   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:07:02.261085   32699 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 15:07:02.261103   32699 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0224 15:07:02.261841   32699 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 15:07:02.364422   32699 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 15:07:02.455679   32699 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 15:07:02.455697   32699 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 15:07:02.469367   32699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:07:02.555761   32699 ssh_runner.go:195] Run: sudo systemctl restart docker
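The 144-byte /etc/docker/daemon.json pushed above is not echoed into the log; per the surrounding messages its purpose is to pin Docker's cgroup driver to cgroupfs before this restart. An approximate reconstruction of the relevant part (assumed, not taken from the run):

    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF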
	I0224 15:07:02.810811   32699 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:07:02.879353   32699 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0224 15:07:02.879426   32699 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 15:07:02.952848   32699 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:07:03.021948   32699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:07:03.097835   32699 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 15:07:03.109230   32699 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 15:07:03.109310   32699 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 15:07:03.113364   32699 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0224 15:07:03.113377   32699 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0224 15:07:03.113383   32699 command_runner.go:130] > Device: 100013h/1048595d	Inode: 206         Links: 1
	I0224 15:07:03.113389   32699 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0224 15:07:03.113398   32699 command_runner.go:130] > Access: 2023-02-24 23:07:03.105379448 +0000
	I0224 15:07:03.113427   32699 command_runner.go:130] > Modify: 2023-02-24 23:07:03.105379448 +0000
	I0224 15:07:03.113435   32699 command_runner.go:130] > Change: 2023-02-24 23:07:03.106379448 +0000
	I0224 15:07:03.113439   32699 command_runner.go:130] >  Birth: -
	I0224 15:07:03.113471   32699 start.go:553] Will wait 60s for crictl version
	I0224 15:07:03.113515   32699 ssh_runner.go:195] Run: which crictl
	I0224 15:07:03.117227   32699 command_runner.go:130] > /usr/bin/crictl
	I0224 15:07:03.117280   32699 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 15:07:03.208024   32699 command_runner.go:130] > Version:  0.1.0
	I0224 15:07:03.208050   32699 command_runner.go:130] > RuntimeName:  docker
	I0224 15:07:03.208057   32699 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0224 15:07:03.208063   32699 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0224 15:07:03.210415   32699 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 15:07:03.210495   32699 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:07:03.235616   32699 command_runner.go:130] > 23.0.1
	I0224 15:07:03.235698   32699 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:07:03.258598   32699 command_runner.go:130] > 23.0.1
	I0224 15:07:03.302840   32699 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 15:07:03.324952   32699 out.go:177]   - env NO_PROXY=192.168.58.2
	I0224 15:07:03.347134   32699 cli_runner.go:164] Run: docker exec -t multinode-358000-m02 dig +short host.docker.internal
	I0224 15:07:03.465504   32699 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 15:07:03.465611   32699 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 15:07:03.470058   32699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:07:03.480042   32699 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000 for IP: 192.168.58.3
	I0224 15:07:03.480069   32699 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:07:03.480254   32699 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 15:07:03.480333   32699 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 15:07:03.480344   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0224 15:07:03.480369   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0224 15:07:03.480387   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0224 15:07:03.480410   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0224 15:07:03.480501   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 15:07:03.480545   32699 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 15:07:03.480556   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 15:07:03.480591   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 15:07:03.480626   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 15:07:03.480657   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 15:07:03.480727   32699 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:07:03.480774   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:07:03.480795   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem -> /usr/share/ca-certificates/26871.pem
	I0224 15:07:03.480814   32699 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> /usr/share/ca-certificates/268712.pem
	I0224 15:07:03.481228   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 15:07:03.498650   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 15:07:03.516321   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 15:07:03.534142   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 15:07:03.551720   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 15:07:03.569392   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 15:07:03.588290   32699 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 15:07:03.605713   32699 ssh_runner.go:195] Run: openssl version
	I0224 15:07:03.611310   32699 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0224 15:07:03.611628   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 15:07:03.620132   32699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 15:07:03.624143   32699 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:07:03.624254   32699 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:07:03.624303   32699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 15:07:03.629865   32699 command_runner.go:130] > 51391683
	I0224 15:07:03.630282   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
	I0224 15:07:03.638632   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 15:07:03.646950   32699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 15:07:03.651035   32699 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:07:03.651169   32699 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:07:03.651217   32699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 15:07:03.656395   32699 command_runner.go:130] > 3ec20f2e
	I0224 15:07:03.656655   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 15:07:03.665561   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 15:07:03.673664   32699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:07:03.677880   32699 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:07:03.677902   32699 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:07:03.677948   32699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:07:03.683247   32699 command_runner.go:130] > b5213941
	I0224 15:07:03.683549   32699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
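The openssl/ln sequence above follows the standard OpenSSL trust-store layout: certificates under /etc/ssl/certs are looked up via symlinks named after their subject hash, so the hash printed by openssl (b5213941 for minikubeCA) becomes the link name. The same pattern in two lines, using the paths from the log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"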
	I0224 15:07:03.691777   32699 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 15:07:03.715388   32699 command_runner.go:130] > cgroupfs
	I0224 15:07:03.717385   32699 cni.go:84] Creating CNI manager for ""
	I0224 15:07:03.717397   32699 cni.go:136] 2 nodes found, recommending kindnet
	I0224 15:07:03.717404   32699 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 15:07:03.717416   32699 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-358000 NodeName:multinode-358000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 15:07:03.717499   32699 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-358000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 15:07:03.717540   32699 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-358000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 15:07:03.717599   32699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 15:07:03.725322   32699 command_runner.go:130] > kubeadm
	I0224 15:07:03.725334   32699 command_runner.go:130] > kubectl
	I0224 15:07:03.725338   32699 command_runner.go:130] > kubelet
	I0224 15:07:03.726180   32699 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 15:07:03.726255   32699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0224 15:07:03.734917   32699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0224 15:07:03.749003   32699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 15:07:03.762036   32699 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0224 15:07:03.765876   32699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:07:03.775927   32699 host.go:66] Checking if "multinode-358000" exists ...
	I0224 15:07:03.776106   32699 config.go:182] Loaded profile config "multinode-358000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:07:03.776145   32699 start.go:301] JoinCluster: &{Name:multinode-358000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-358000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:07:03.776227   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0224 15:07:03.776279   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:07:03.835876   32699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:07:04.003448   32699 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token q2oruh.uu23whj99rwoor7l --discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e 
	I0224 15:07:04.003503   32699 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 15:07:04.003530   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q2oruh.uu23whj99rwoor7l --discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-358000-m02"
	I0224 15:07:04.045831   32699 command_runner.go:130] > [preflight] Running pre-flight checks
	I0224 15:07:04.160233   32699 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0224 15:07:04.160255   32699 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0224 15:07:04.187149   32699 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:07:04.187183   32699 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:07:04.187187   32699 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0224 15:07:04.264594   32699 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0224 15:07:05.777610   32699 command_runner.go:130] > This node has joined the cluster:
	I0224 15:07:05.777636   32699 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0224 15:07:05.777648   32699 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0224 15:07:05.777662   32699 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0224 15:07:05.781276   32699 command_runner.go:130] ! W0224 23:07:04.044825    1234 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 15:07:05.781302   32699 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0224 15:07:05.781314   32699 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:07:05.781329   32699 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q2oruh.uu23whj99rwoor7l --discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-358000-m02": (1.777735574s)
	I0224 15:07:05.781344   32699 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0224 15:07:05.945590   32699 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0224 15:07:05.945607   32699 start.go:303] JoinCluster complete in 2.169398252s
	I0224 15:07:05.945617   32699 cni.go:84] Creating CNI manager for ""
	I0224 15:07:05.945622   32699 cni.go:136] 2 nodes found, recommending kindnet
	I0224 15:07:05.945709   32699 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0224 15:07:05.950020   32699 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0224 15:07:05.950046   32699 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0224 15:07:05.950060   32699 command_runner.go:130] > Device: a6h/166d	Inode: 2757559     Links: 1
	I0224 15:07:05.950072   32699 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0224 15:07:05.950084   32699 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0224 15:07:05.950090   32699 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0224 15:07:05.950095   32699 command_runner.go:130] > Change: 2023-02-24 22:41:56.035825051 +0000
	I0224 15:07:05.950099   32699 command_runner.go:130] >  Birth: -
	I0224 15:07:05.950129   32699 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0224 15:07:05.950135   32699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0224 15:07:05.963464   32699 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 15:07:06.127021   32699 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0224 15:07:06.129389   32699 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0224 15:07:06.131235   32699 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0224 15:07:06.140421   32699 command_runner.go:130] > daemonset.apps/kindnet configured
	I0224 15:07:06.147087   32699 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:07:06.147288   32699 kapi.go:59] client config for multinode-358000: &rest.Config{Host:"https://127.0.0.1:58093", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:07:06.147532   32699 round_trippers.go:463] GET https://127.0.0.1:58093/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0224 15:07:06.147539   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.147545   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.147551   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.150202   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.150214   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.150219   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.150225   32699 round_trippers.go:580]     Audit-Id: 8e1a5a25-13e9-46f1-b668-f09239e24f6c
	I0224 15:07:06.150230   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.150236   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.150249   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.150255   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.150259   32699 round_trippers.go:580]     Content-Length: 291
	I0224 15:07:06.150271   32699 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0964618a-58cb-4193-adc5-a51a070222ce","resourceVersion":"424","creationTimestamp":"2023-02-24T23:06:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0224 15:07:06.150321   32699 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-358000" context rescaled to 1 replicas
	I0224 15:07:06.150336   32699 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0224 15:07:06.171604   32699 out.go:177] * Verifying Kubernetes components...
	I0224 15:07:06.212607   32699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:07:06.224599   32699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:07:06.283806   32699 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:07:06.284045   32699 kapi.go:59] client config for multinode-358000: &rest.Config{Host:"https://127.0.0.1:58093", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/multinode-358000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:07:06.284291   32699 node_ready.go:35] waiting up to 6m0s for node "multinode-358000-m02" to be "Ready" ...
	I0224 15:07:06.284342   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:06.284347   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.284353   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.284360   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.286858   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.286873   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.286879   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.286885   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.286895   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.286900   32699 round_trippers.go:580]     Audit-Id: 6e3d757f-5dcd-42b9-b4a4-18b4f86b6a72
	I0224 15:07:06.286911   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.286916   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.286995   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:06.287204   32699 node_ready.go:49] node "multinode-358000-m02" has status "Ready":"True"
	I0224 15:07:06.287210   32699 node_ready.go:38] duration metric: took 2.91073ms waiting for node "multinode-358000-m02" to be "Ready" ...
	I0224 15:07:06.287216   32699 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 15:07:06.287255   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods
	I0224 15:07:06.287260   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.287265   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.287271   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.290282   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:06.290292   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.290298   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.290303   32699 round_trippers.go:580]     Audit-Id: 1084d92a-e995-4176-9a86-422a1bc76ce7
	I0224 15:07:06.290310   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.290316   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.290320   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.290329   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.291639   32699 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"472"},"items":[{"metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"419","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 65541 chars]
	I0224 15:07:06.293277   32699 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-qfqth" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.293326   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-qfqth
	I0224 15:07:06.293332   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.293338   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.293343   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.296247   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.296260   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.296266   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.296270   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.296275   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.296284   32699 round_trippers.go:580]     Audit-Id: 964c98ff-e9c8-4271-aff4-38c92ddef0cb
	I0224 15:07:06.296291   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.296296   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.296358   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-qfqth","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"e37f5c65-d431-4ae8-9447-b6d61ee81dcd","resourceVersion":"419","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"98f8034c-059b-46b2-a45c-7d339806ca73","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98f8034c-059b-46b2-a45c-7d339806ca73\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0224 15:07:06.296659   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:06.296665   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.296671   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.296681   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.299036   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.299047   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.299053   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.299058   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.299066   32699 round_trippers.go:580]     Audit-Id: 4536da28-fb2e-47de-bfa3-29108a013910
	I0224 15:07:06.299073   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.299079   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.299086   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.299175   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0224 15:07:06.299381   32699 pod_ready.go:92] pod "coredns-787d4945fb-qfqth" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:06.299388   32699 pod_ready.go:81] duration metric: took 6.100854ms waiting for pod "coredns-787d4945fb-qfqth" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.299394   32699 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.299435   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/etcd-multinode-358000
	I0224 15:07:06.299441   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.299447   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.299452   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.301648   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.301660   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.301667   32699 round_trippers.go:580]     Audit-Id: 9c2f7345-37eb-4278-be6e-8b09f142faf9
	I0224 15:07:06.301674   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.301679   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.301685   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.301691   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.301696   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.301763   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-358000","namespace":"kube-system","uid":"cae08591-19d2-4e50-ba6b-73cf4552218c","resourceVersion":"282","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"f0e985c72569733baf436fab6e966c50","kubernetes.io/config.mirror":"f0e985c72569733baf436fab6e966c50","kubernetes.io/config.seen":"2023-02-24T23:06:20.399469529Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0224 15:07:06.302001   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:06.302008   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.302013   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.302019   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.304276   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.304286   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.304291   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.304309   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.304318   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.304323   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.304335   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.304342   32699 round_trippers.go:580]     Audit-Id: 278216a7-74a0-4953-9132-7fe06f1c8231
	I0224 15:07:06.304435   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0224 15:07:06.304637   32699 pod_ready.go:92] pod "etcd-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:06.304644   32699 pod_ready.go:81] duration metric: took 5.245151ms waiting for pod "etcd-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.304652   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.304683   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-358000
	I0224 15:07:06.304688   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.304694   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.304699   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.306747   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.306758   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.306776   32699 round_trippers.go:580]     Audit-Id: 6d38573a-89e7-4a39-8bcf-4f782c2ebee9
	I0224 15:07:06.306784   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.306789   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.306794   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.306799   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.306805   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.306879   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-358000","namespace":"kube-system","uid":"9f99728a-c30f-46f0-aa6c-914ce4f95c85","resourceVersion":"385","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"59d82eb7ee48c62b0e1b1157e56efad8","kubernetes.io/config.mirror":"59d82eb7ee48c62b0e1b1157e56efad8","kubernetes.io/config.seen":"2023-02-24T23:06:20.399481307Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0224 15:07:06.307142   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:06.307148   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.307153   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.307159   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.309296   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.309308   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.309314   32699 round_trippers.go:580]     Audit-Id: 51a481e4-c5d0-4e1f-ba25-a55846d0a9c9
	I0224 15:07:06.309319   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.309324   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.309332   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.309337   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.309342   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.309410   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0224 15:07:06.309618   32699 pod_ready.go:92] pod "kube-apiserver-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:06.309624   32699 pod_ready.go:81] duration metric: took 4.966728ms waiting for pod "kube-apiserver-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.309629   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.309662   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-358000
	I0224 15:07:06.309667   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.309672   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.309678   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.311852   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.311861   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.311867   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.311874   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.311880   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.311885   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.311890   32699 round_trippers.go:580]     Audit-Id: 7e3fe730-4bb1-4bcf-bcce-b09fe46d181f
	I0224 15:07:06.311896   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.311967   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-358000","namespace":"kube-system","uid":"6d26b160-2631-4696-9633-0da5de0f9e6c","resourceVersion":"284","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"caa2ad2bdf9da4ce166e0faa9958c6ef","kubernetes.io/config.mirror":"caa2ad2bdf9da4ce166e0faa9958c6ef","kubernetes.io/config.seen":"2023-02-24T23:06:20.399482388Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0224 15:07:06.312234   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:06.312240   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.312247   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.312253   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.314475   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:06.314485   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.314490   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.314496   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.314503   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.314510   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.314516   32699 round_trippers.go:580]     Audit-Id: e20671b9-489b-4c27-ae74-d388c74639e5
	I0224 15:07:06.314521   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.314646   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0224 15:07:06.314818   32699 pod_ready.go:92] pod "kube-controller-manager-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:06.314824   32699 pod_ready.go:81] duration metric: took 5.189582ms waiting for pod "kube-controller-manager-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.314831   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-855bv" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:06.484613   32699 request.go:622] Waited for 169.72075ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:06.484730   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:06.484742   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.484754   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.484771   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.488656   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:06.488671   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.488679   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.488687   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.488700   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.488708   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.488714   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.488723   32699 round_trippers.go:580]     Audit-Id: f4b6075d-239b-4ecc-b833-c0355e38dcb2
	I0224 15:07:06.488795   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"460","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 4018 chars]
	I0224 15:07:06.685124   32699 request.go:622] Waited for 196.048239ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:06.685219   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:06.685231   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:06.685247   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:06.685258   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:06.689259   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:06.689277   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:06.689289   32699 round_trippers.go:580]     Audit-Id: 89f7580c-6127-46ab-9cbe-0a044089cc61
	I0224 15:07:06.689296   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:06.689304   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:06.689310   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:06.689317   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:06.689333   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:06 GMT
	I0224 15:07:06.689563   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:07.191265   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:07.191284   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:07.191309   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:07.191319   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:07.195096   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:07.195115   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:07.195124   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:07.195131   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:07.195137   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:07 GMT
	I0224 15:07:07.195144   32699 round_trippers.go:580]     Audit-Id: 39f72e14-9b49-4e46-9c4b-f0c3c0235722
	I0224 15:07:07.195149   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:07.195154   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:07.195263   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"473","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 15:07:07.195563   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:07.195573   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:07.195580   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:07.195587   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:07.199379   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:07.199399   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:07.199409   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:07.199416   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:07.199423   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:07 GMT
	I0224 15:07:07.199432   32699 round_trippers.go:580]     Audit-Id: 2d1a02b9-6659-4684-a6c9-198dcfa57521
	I0224 15:07:07.199440   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:07.199448   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:07.199530   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:07.691350   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:07.691371   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:07.691383   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:07.691393   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:07.695277   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:07.695297   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:07.695315   32699 round_trippers.go:580]     Audit-Id: a9ab69ec-8dc2-448e-9baa-f8ffd84e4fc4
	I0224 15:07:07.695323   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:07.695333   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:07.695339   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:07.695343   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:07.695349   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:07 GMT
	I0224 15:07:07.695418   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"473","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 15:07:07.695672   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:07.695678   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:07.695684   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:07.695691   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:07.697436   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:07:07.697445   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:07.697450   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:07.697456   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:07.697460   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:07.697467   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:07 GMT
	I0224 15:07:07.697472   32699 round_trippers.go:580]     Audit-Id: d4362a29-7af3-440d-83a6-1e7309470ca4
	I0224 15:07:07.697477   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:07.697587   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:08.191408   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:08.191433   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:08.191445   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:08.191455   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:08.195566   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:07:08.195580   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:08.195586   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:08.195590   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:08.195595   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:08 GMT
	I0224 15:07:08.195600   32699 round_trippers.go:580]     Audit-Id: f98f2627-f131-4820-a004-c737559e1abd
	I0224 15:07:08.195605   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:08.195611   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:08.195678   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"473","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 15:07:08.195949   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:08.195956   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:08.195962   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:08.195966   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:08.198345   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:08.198360   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:08.198368   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:08.198376   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:08 GMT
	I0224 15:07:08.198381   32699 round_trippers.go:580]     Audit-Id: 4875b40c-1275-4b35-9b4c-380f915833d1
	I0224 15:07:08.198386   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:08.198391   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:08.198396   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:08.198449   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:08.691379   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:08.691399   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:08.691411   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:08.691426   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:08.695316   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:08.695326   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:08.695332   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:08 GMT
	I0224 15:07:08.695342   32699 round_trippers.go:580]     Audit-Id: d55384d3-a37c-494b-beb5-4b7038e4fbf1
	I0224 15:07:08.695347   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:08.695352   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:08.695357   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:08.695363   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:08.695433   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"473","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 15:07:08.695729   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:08.695737   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:08.695744   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:08.695751   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:08.698071   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:08.698084   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:08.698089   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:08.698094   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:08 GMT
	I0224 15:07:08.698100   32699 round_trippers.go:580]     Audit-Id: 65d9d778-b0ae-4437-9e8e-c9aea36028dc
	I0224 15:07:08.698105   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:08.698112   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:08.698117   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:08.698382   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:08.698544   32699 pod_ready.go:102] pod "kube-proxy-855bv" in "kube-system" namespace has status "Ready":"False"
	I0224 15:07:09.191417   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:09.191442   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:09.191498   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:09.191506   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:09.194842   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:09.194852   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:09.194858   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:09 GMT
	I0224 15:07:09.194863   32699 round_trippers.go:580]     Audit-Id: 7789971f-de57-4dd3-9c30-03ed9f1005f6
	I0224 15:07:09.194868   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:09.194872   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:09.194877   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:09.194882   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:09.194944   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"473","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 15:07:09.195201   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:09.195207   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:09.195213   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:09.195218   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:09.197143   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:07:09.197152   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:09.197157   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:09.197162   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:09.197168   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:09.197172   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:09.197177   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:09 GMT
	I0224 15:07:09.197204   32699 round_trippers.go:580]     Audit-Id: 5775bf86-a0ad-45cf-92b1-49a787831366
	I0224 15:07:09.197252   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:09.691328   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:09.691343   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:09.691350   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:09.691355   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:09.694491   32699 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0224 15:07:09.694502   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:09.694508   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:09.694513   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:09.694517   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:09.694522   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:09 GMT
	I0224 15:07:09.694527   32699 round_trippers.go:580]     Audit-Id: 27ae4372-bbf9-413d-b01a-b6c955f1401b
	I0224 15:07:09.694532   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:09.694589   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"473","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0224 15:07:09.694873   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:09.694881   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:09.694887   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:09.694893   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:09.697022   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:09.697035   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:09.697041   32699 round_trippers.go:580]     Audit-Id: b0425882-56c2-44a4-b1e1-c1f0b9296433
	I0224 15:07:09.697046   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:09.697053   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:09.697060   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:09.697065   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:09.697070   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:09 GMT
	I0224 15:07:09.697130   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:10.191352   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-855bv
	I0224 15:07:10.191367   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.191388   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.191397   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.193981   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:10.193993   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.193999   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.194004   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.194009   32699 round_trippers.go:580]     Audit-Id: 10d1fbcf-729e-4ddb-a250-aead773308b7
	I0224 15:07:10.194014   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.194023   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.194032   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.194306   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-855bv","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bce3602-6484-41fe-b568-80be0b60645d","resourceVersion":"484","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0224 15:07:10.194581   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000-m02
	I0224 15:07:10.194587   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.194593   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.194599   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.196691   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:10.196699   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.196705   32699 round_trippers.go:580]     Audit-Id: ca64cdd8-d309-4dc3-8168-761455f445ed
	I0224 15:07:10.196712   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.196718   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.196722   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.196728   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.196732   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.196770   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000-m02","uid":"46d9e24b-b9f1-4fe2-a88f-200ca66f518d","resourceVersion":"472","creationTimestamp":"2023-02-24T23:07:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:07:05Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.a [truncated 4014 chars]
	I0224 15:07:10.196919   32699 pod_ready.go:92] pod "kube-proxy-855bv" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:10.196927   32699 pod_ready.go:81] duration metric: took 3.881975044s waiting for pod "kube-proxy-855bv" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:10.196933   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rsf5q" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:10.196967   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-proxy-rsf5q
	I0224 15:07:10.196972   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.196977   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.196982   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.199321   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:10.199330   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.199336   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.199341   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.199347   32699 round_trippers.go:580]     Audit-Id: 44258eab-9e2d-46de-baf7-b13ecd40fca8
	I0224 15:07:10.199353   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.199358   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.199363   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.199413   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rsf5q","generateName":"kube-proxy-","namespace":"kube-system","uid":"34fab1a9-3416-47c1-9239-d7276b496a73","resourceVersion":"389","creationTimestamp":"2023-02-24T23:06:33Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f229b8b9-43d3-45ca-a68f-e530015f7443","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f229b8b9-43d3-45ca-a68f-e530015f7443\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0224 15:07:10.199643   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:10.199649   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.199656   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.199661   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.201483   32699 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0224 15:07:10.201493   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.201498   32699 round_trippers.go:580]     Audit-Id: fe4c3f80-902a-4e1e-a7de-7c377863a649
	I0224 15:07:10.201504   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.201509   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.201516   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.201521   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.201528   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.201598   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0224 15:07:10.201784   32699 pod_ready.go:92] pod "kube-proxy-rsf5q" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:10.201789   32699 pod_ready.go:81] duration metric: took 4.852188ms waiting for pod "kube-proxy-rsf5q" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:10.201795   32699 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:10.201821   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-358000
	I0224 15:07:10.201825   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.201831   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.201836   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.204079   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:10.204087   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.204092   32699 round_trippers.go:580]     Audit-Id: 1da7862d-69ad-435e-8344-14f7c22bbfdc
	I0224 15:07:10.204097   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.204103   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.204107   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.204112   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.204117   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.204169   32699 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-358000","namespace":"kube-system","uid":"f1b648f4-a02a-4931-a791-578a6dba081f","resourceVersion":"281","creationTimestamp":"2023-02-24T23:06:20Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1df55cdda41cb4d1ff214c8ea21fdc45","kubernetes.io/config.mirror":"1df55cdda41cb4d1ff214c8ea21fdc45","kubernetes.io/config.seen":"2023-02-24T23:06:20.399486321Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-24T23:06:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0224 15:07:10.284513   32699 request.go:622] Waited for 80.136449ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:10.284560   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes/multinode-358000
	I0224 15:07:10.284567   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.284574   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.284580   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.287491   32699 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0224 15:07:10.287503   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.287509   32699 round_trippers.go:580]     Audit-Id: cb868c95-c0a8-4c55-a0ed-33e653c10f77
	I0224 15:07:10.287514   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.287519   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.287524   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.287528   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.287533   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.287666   32699 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-24T23:06:17Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0224 15:07:10.287873   32699 pod_ready.go:92] pod "kube-scheduler-multinode-358000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:07:10.287879   32699 pod_ready.go:81] duration metric: took 86.077674ms waiting for pod "kube-scheduler-multinode-358000" in "kube-system" namespace to be "Ready" ...
	I0224 15:07:10.287886   32699 pod_ready.go:38] duration metric: took 4.000542645s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 15:07:10.287896   32699 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 15:07:10.287939   32699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:07:10.299727   32699 system_svc.go:56] duration metric: took 11.825115ms WaitForService to wait for kubelet.
	I0224 15:07:10.299742   32699 kubeadm.go:578] duration metric: took 4.149265905s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0224 15:07:10.299756   32699 node_conditions.go:102] verifying NodePressure condition ...
	I0224 15:07:10.485185   32699 request.go:622] Waited for 185.375168ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:58093/api/v1/nodes
	I0224 15:07:10.485251   32699 round_trippers.go:463] GET https://127.0.0.1:58093/api/v1/nodes
	I0224 15:07:10.485269   32699 round_trippers.go:469] Request Headers:
	I0224 15:07:10.485285   32699 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0224 15:07:10.485297   32699 round_trippers.go:473]     Accept: application/json, */*
	I0224 15:07:10.489590   32699 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0224 15:07:10.489603   32699 round_trippers.go:577] Response Headers:
	I0224 15:07:10.489609   32699 round_trippers.go:580]     Audit-Id: b455aba7-b8f2-4217-bd03-7bbe148d9a21
	I0224 15:07:10.489614   32699 round_trippers.go:580]     Cache-Control: no-cache, private
	I0224 15:07:10.489619   32699 round_trippers.go:580]     Content-Type: application/json
	I0224 15:07:10.489626   32699 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 58d68444-b03c-45e9-b05c-27edee8e59c7
	I0224 15:07:10.489632   32699 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 42ff025a-dfbd-4091-ab4e-9b2bfd9760db
	I0224 15:07:10.489637   32699 round_trippers.go:580]     Date: Fri, 24 Feb 2023 23:07:10 GMT
	I0224 15:07:10.489738   32699 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"486"},"items":[{"metadata":{"name":"multinode-358000","uid":"57c62ec9-5016-4d00-a701-5db410582ddb","resourceVersion":"429","creationTimestamp":"2023-02-24T23:06:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-358000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"08976559d74fb9c2654733dc21cb8f9d9ec24374","minikube.k8s.io/name":"multinode-358000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_24T15_06_21_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10175 chars]
	I0224 15:07:10.490109   32699 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0224 15:07:10.490119   32699 node_conditions.go:123] node cpu capacity is 6
	I0224 15:07:10.490129   32699 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0224 15:07:10.490133   32699 node_conditions.go:123] node cpu capacity is 6
	I0224 15:07:10.490136   32699 node_conditions.go:105] duration metric: took 190.370203ms to run NodePressure ...
	I0224 15:07:10.490143   32699 start.go:228] waiting for startup goroutines ...
	I0224 15:07:10.490177   32699 start.go:242] writing updated cluster config ...
	I0224 15:07:10.490579   32699 ssh_runner.go:195] Run: rm -f paused
	I0224 15:07:10.529016   32699 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0224 15:07:10.550662   32699 out.go:177] * Done! kubectl is now configured to use "multinode-358000" cluster and "default" namespace by default
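	
	The readiness loop above (pod_ready.go repeatedly fetching the pod and its node until the Ready condition turns True, giving up after 6m0s) is an ordinary client-side poll against the API server. The Go sketch below reproduces the same pattern with client-go; the kubeconfig path, namespace, pod name, and intervals are illustrative assumptions for this example, not values read from minikube's sources.

	    // pod_ready_sketch.go: illustrative only; poll a pod until its Ready condition is True.
	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Assumption: a standard kubeconfig at the default location points at the cluster.
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        client, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        // Poll every 500ms for up to 6 minutes, mirroring the "waiting up to 6m0s" lines above.
	        err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
	            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-855bv", metav1.GetOptions{})
	            if err != nil {
	                return false, err
	            }
	            for _, cond := range pod.Status.Conditions {
	                if cond.Type == corev1.PodReady {
	                    return cond.Status == corev1.ConditionTrue, nil
	                }
	            }
	            return false, nil
	        })
	        fmt.Println("ready wait finished, err =", err)
	    }

	Note that the "Waited for ... due to client-side throttling, not priority and fairness" lines above come from the client's own rate limiter delaying requests, not from server-side priority and fairness.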
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 23:06:01 UTC, end at Fri 2023-02-24 23:07:24 UTC. --
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085577554Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085603026Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085613126Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085661876Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085686954Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085707186Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085753305Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085830710Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.085857726Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.086481124Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.086529964Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.087031682Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.094537926Z" level=info msg="Loading containers: start."
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.173169539Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.206022332Z" level=info msg="Loading containers: done."
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.214236565Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.214341616Z" level=info msg="Daemon has completed initialization"
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.235611005Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 23:06:05 multinode-358000 systemd[1]: Started Docker Application Container Engine.
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.239437575Z" level=info msg="API listen on [::]:2376"
	Feb 24 23:06:05 multinode-358000 dockerd[831]: time="2023-02-24T23:06:05.244656211Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 24 23:06:47 multinode-358000 dockerd[831]: time="2023-02-24T23:06:47.972912411Z" level=info msg="ignoring event" container=0739052e07226c5f180b03d44a1d09595b6474cfeac246dd1800195c74339f9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 23:06:48 multinode-358000 dockerd[831]: time="2023-02-24T23:06:48.086284897Z" level=info msg="ignoring event" container=a0cf71fcfe55092a281a02d546d3a236123195e4007be424f5e9784c12f57587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 23:06:48 multinode-358000 dockerd[831]: time="2023-02-24T23:06:48.486771907Z" level=info msg="ignoring event" container=91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 23:06:48 multinode-358000 dockerd[831]: time="2023-02-24T23:06:48.594340397Z" level=info msg="ignoring event" container=0a3266320057592f368ad3c52aba426612addb694e2e2af650e88909c7add2a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	4da301117c47a       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   9 seconds ago        Running             busybox                   0                   f0887f262b1d2
	c78cf81da9f02       5185b96f0becf                                                                                         36 seconds ago       Running             coredns                   1                   9fa4123cb816a
	45b6781b3c7fc       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              46 seconds ago       Running             kindnet-cni               0                   d729c67799665
	3777a98837330       6e38f40d628db                                                                                         49 seconds ago       Running             storage-provisioner       0                   fab46a66ddac6
	0739052e07226       5185b96f0becf                                                                                         50 seconds ago       Exited              coredns                   0                   a0cf71fcfe550
	40f2d805fba78       46a6bb3c77ce0                                                                                         50 seconds ago       Running             kube-proxy                0                   320666295bb52
	46937fcaeefed       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   411439c0f9588
	5c5051f9acbc0       e9c08e11b07f6                                                                                         About a minute ago   Running             kube-controller-manager   0                   06ec17b1a9fc2
	6dd5e22701b0a       655493523f607                                                                                         About a minute ago   Running             kube-scheduler            0                   a18e7ab9864c2
	b06dd1eae15b5       deb04688c4a35                                                                                         About a minute ago   Running             kube-apiserver            0                   94053a2f077b5
	
	* 
	* ==> coredns [0739052e0722] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 6913905827935292786.8584660205220874109. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 6913905827935292786.8584660205220874109. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	
	* 
	* ==> coredns [c78cf81da9f0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:35475 - 12806 "HINFO IN 6966946626238033810.3083996661101390669. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01412589s
	[INFO] 10.244.0.3:51576 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170312s
	[INFO] 10.244.0.3:49069 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.051019823s
	[INFO] 10.244.0.3:36866 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003480383s
	[INFO] 10.244.0.3:52548 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.011924194s
	[INFO] 10.244.0.3:57435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000172307s
	[INFO] 10.244.0.3:50752 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00487532s
	[INFO] 10.244.0.3:37941 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00016835s
	[INFO] 10.244.0.3:54364 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001313s
	[INFO] 10.244.0.3:36795 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004073706s
	[INFO] 10.244.0.3:57809 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000115928s
	[INFO] 10.244.0.3:51279 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009685s
	[INFO] 10.244.0.3:54221 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000201848s
	[INFO] 10.244.0.3:37061 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132724s
	[INFO] 10.244.0.3:39696 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108898s
	[INFO] 10.244.0.3:50904 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088582s
	[INFO] 10.244.0.3:57173 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098117s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-358000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-358000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08976559d74fb9c2654733dc21cb8f9d9ec24374
	                    minikube.k8s.io/name=multinode-358000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_24T15_06_21_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 23:06:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-358000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 23:07:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 23:07:21 +0000   Fri, 24 Feb 2023 23:06:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 23:07:21 +0000   Fri, 24 Feb 2023 23:06:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 23:07:21 +0000   Fri, 24 Feb 2023 23:06:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 23:07:21 +0000   Fri, 24 Feb 2023 23:06:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-358000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    892ae553-d6f4-4035-a8a5-8b0131f3b246
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-tnqbs                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-787d4945fb-qfqth                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     52s
	  kube-system                 etcd-multinode-358000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         65s
	  kube-system                 kindnet-894f4                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      52s
	  kube-system                 kube-apiserver-multinode-358000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-multinode-358000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-proxy-rsf5q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-scheduler-multinode-358000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 50s   kube-proxy       
	  Normal  Starting                 65s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  65s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  65s   kubelet          Node multinode-358000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s   kubelet          Node multinode-358000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s   kubelet          Node multinode-358000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s   node-controller  Node multinode-358000 event: Registered Node multinode-358000 in Controller
	
	
	Name:               multinode-358000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-358000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 23:07:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-358000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 23:07:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 23:07:05 +0000   Fri, 24 Feb 2023 23:07:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 23:07:05 +0000   Fri, 24 Feb 2023 23:07:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 23:07:05 +0000   Fri, 24 Feb 2023 23:07:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 23:07:05 +0000   Fri, 24 Feb 2023 23:07:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-358000-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    892ae553-d6f4-4035-a8a5-8b0131f3b246
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-5zqv7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kindnet-5qvwr               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-proxy-855bv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)  kubelet          Node multinode-358000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)  kubelet          Node multinode-358000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)  kubelet          Node multinode-358000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                20s                kubelet          Node multinode-358000-m02 status is now: NodeReady
	  Normal  RegisteredNode           18s                node-controller  Node multinode-358000-m02 event: Registered Node multinode-358000-m02 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000064] FS-Cache: O-key=[8] '235dc60400000000'
	[  +0.000061] FS-Cache: N-cookie c=0000001c [p=00000014 fl=2 nc=0 na=1]
	[  +0.000043] FS-Cache: N-cookie d=00000000b6b06f96{9p.inode} n=000000001a032d23
	[  +0.000078] FS-Cache: N-key=[8] '235dc60400000000'
	[  +0.003038] FS-Cache: Duplicate cookie detected
	[  +0.000092] FS-Cache: O-cookie c=00000016 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000045] FS-Cache: O-cookie d=00000000b6b06f96{9p.inode} n=00000000c09690c2
	[  +0.000073] FS-Cache: O-key=[8] '235dc60400000000'
	[  +0.000050] FS-Cache: N-cookie c=0000001d [p=00000014 fl=2 nc=0 na=1]
	[  +0.000048] FS-Cache: N-cookie d=00000000b6b06f96{9p.inode} n=00000000fa0a547a
	[  +0.000052] FS-Cache: N-key=[8] '235dc60400000000'
	[  +3.553193] FS-Cache: Duplicate cookie detected
	[  +0.000091] FS-Cache: O-cookie c=00000017 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000052] FS-Cache: O-cookie d=00000000b6b06f96{9p.inode} n=0000000081c9d0cb
	[  +0.000059] FS-Cache: O-key=[8] '225dc60400000000'
	[  +0.000031] FS-Cache: N-cookie c=00000020 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000052] FS-Cache: N-cookie d=00000000b6b06f96{9p.inode} n=0000000011fb7533
	[  +0.000047] FS-Cache: N-key=[8] '225dc60400000000'
	[  +0.400852] FS-Cache: Duplicate cookie detected
	[  +0.000038] FS-Cache: O-cookie c=0000001a [p=00000014 fl=226 nc=0 na=1]
	[  +0.000057] FS-Cache: O-cookie d=00000000b6b06f96{9p.inode} n=00000000dd227ced
	[  +0.000061] FS-Cache: O-key=[8] '2b5dc60400000000'
	[  +0.000046] FS-Cache: N-cookie c=00000021 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000033] FS-Cache: N-cookie d=00000000b6b06f96{9p.inode} n=00000000cab77509
	[  +0.000067] FS-Cache: N-key=[8] '2b5dc60400000000'
	
	* 
	* ==> etcd [46937fcaeefe] <==
	* {"level":"info","ts":"2023-02-24T23:06:15.387Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-24T23:06:15.387Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-24T23:06:15.387Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-24T23:06:16.374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-24T23:06:16.375Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-358000 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-24T23:06:16.375Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T23:06:16.375Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T23:06:16.375Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T23:06:16.376Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-24T23:06:16.376Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-02-24T23:06:16.377Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T23:06:16.377Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T23:06:16.377Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T23:06:16.377Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-24T23:06:16.377Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-24T23:06:20.138Z","caller":"traceutil/trace.go:171","msg":"trace[637080141] transaction","detail":"{read_only:false; response_revision:217; number_of_response:1; }","duration":"115.330473ms","start":"2023-02-24T23:06:20.023Z","end":"2023-02-24T23:06:20.138Z","steps":["trace[637080141] 'process raft request'  (duration: 115.291218ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-24T23:06:20.138Z","caller":"traceutil/trace.go:171","msg":"trace[972468534] transaction","detail":"{read_only:false; response_revision:216; number_of_response:1; }","duration":"134.148297ms","start":"2023-02-24T23:06:20.004Z","end":"2023-02-24T23:06:20.138Z","steps":["trace[972468534] 'process raft request'  (duration: 95.186825ms)","trace[972468534] 'compare'  (duration: 38.392219ms)"],"step_count":2}
	{"level":"info","ts":"2023-02-24T23:06:55.420Z","caller":"traceutil/trace.go:171","msg":"trace[1680627844] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"220.920869ms","start":"2023-02-24T23:06:55.198Z","end":"2023-02-24T23:06:55.420Z","steps":["trace[1680627844] 'process raft request'  (duration: 220.813103ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-24T23:06:57.687Z","caller":"traceutil/trace.go:171","msg":"trace[1695407084] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"262.174178ms","start":"2023-02-24T23:06:57.425Z","end":"2023-02-24T23:06:57.687Z","steps":["trace[1695407084] 'process raft request'  (duration: 262.082425ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  23:07:25 up  2:06,  0 users,  load average: 1.77, 1.17, 0.88
	Linux multinode-358000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [45b6781b3c7f] <==
	* I0224 23:06:38.554773       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0224 23:06:38.554887       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0224 23:06:38.555020       1 main.go:116] setting mtu 1500 for CNI 
	I0224 23:06:38.555035       1 main.go:146] kindnetd IP family: "ipv4"
	I0224 23:06:38.555051       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0224 23:06:39.154874       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 23:06:39.154932       1 main.go:227] handling current node
	I0224 23:06:49.265414       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 23:06:49.265453       1 main.go:227] handling current node
	I0224 23:06:59.278084       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 23:06:59.278159       1 main.go:227] handling current node
	I0224 23:07:09.282983       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 23:07:09.283024       1 main.go:227] handling current node
	I0224 23:07:09.283032       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0224 23:07:09.283036       1 main.go:250] Node multinode-358000-m02 has CIDR [10.244.1.0/24] 
	I0224 23:07:09.283135       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0224 23:07:19.294453       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0224 23:07:19.294493       1 main.go:227] handling current node
	I0224 23:07:19.294501       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0224 23:07:19.294505       1 main.go:250] Node multinode-358000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [b06dd1eae15b] <==
	* I0224 23:06:17.509104       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0224 23:06:17.513317       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0224 23:06:17.513410       1 shared_informer.go:280] Caches are synced for configmaps
	I0224 23:06:17.513469       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0224 23:06:17.513926       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0224 23:06:17.513972       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0224 23:06:17.514019       1 cache.go:39] Caches are synced for autoregister controller
	I0224 23:06:17.514064       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0224 23:06:17.514775       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0224 23:06:18.233048       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0224 23:06:18.418934       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0224 23:06:18.421549       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0224 23:06:18.421630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 23:06:18.855768       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 23:06:18.888482       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0224 23:06:18.967446       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0224 23:06:18.973225       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0224 23:06:18.973773       1 controller.go:615] quota admission added evaluator for: endpoints
	I0224 23:06:18.977909       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0224 23:06:19.474335       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 23:06:20.307048       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 23:06:20.316971       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0224 23:06:20.324072       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 23:06:32.782998       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0224 23:06:33.253037       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [5c5051f9acbc] <==
	* I0224 23:06:32.942422       1 shared_informer.go:280] Caches are synced for expand
	I0224 23:06:32.976915       1 shared_informer.go:280] Caches are synced for endpoint_slice_mirroring
	I0224 23:06:32.983276       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 23:06:33.028180       1 shared_informer.go:280] Caches are synced for endpoint
	I0224 23:06:33.060401       1 shared_informer.go:280] Caches are synced for resource quota
	I0224 23:06:33.100541       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0224 23:06:33.203306       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-tkkfd"
	I0224 23:06:33.209150       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-qfqth"
	I0224 23:06:33.266092       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rsf5q"
	I0224 23:06:33.271432       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-894f4"
	I0224 23:06:33.271454       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-tkkfd"
	I0224 23:06:33.371655       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 23:06:33.381422       1 event.go:294] "Event occurred" object="kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kube-system/kube-dns: endpoints \"kube-dns\" already exists"
	I0224 23:06:33.389034       1 shared_informer.go:280] Caches are synced for garbage collector
	I0224 23:06:33.389085       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	W0224 23:07:05.165358       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-358000-m02" does not exist
	I0224 23:07:05.169839       1 range_allocator.go:372] Set node multinode-358000-m02 PodCIDR to [10.244.1.0/24]
	I0224 23:07:05.172782       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5qvwr"
	I0224 23:07:05.176589       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-855bv"
	W0224 23:07:05.782830       1 topologycache.go:232] Can't get CPU or zone information for multinode-358000-m02 node
	W0224 23:07:07.783813       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-358000-m02. Assuming now as a timestamp.
	I0224 23:07:07.783966       1 event.go:294] "Event occurred" object="multinode-358000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-358000-m02 event: Registered Node multinode-358000-m02 in Controller"
	I0224 23:07:11.523322       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0224 23:07:11.570993       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-5zqv7"
	I0224 23:07:11.575925       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-tnqbs"
	
	* 
	* ==> kube-proxy [40f2d805fba7] <==
	* I0224 23:06:34.374638       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0224 23:06:34.374760       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0224 23:06:34.374802       1 server_others.go:535] "Using iptables proxy"
	I0224 23:06:34.397988       1 server_others.go:176] "Using iptables Proxier"
	I0224 23:06:34.398039       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0224 23:06:34.398047       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0224 23:06:34.398062       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0224 23:06:34.398083       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0224 23:06:34.398621       1 server.go:655] "Version info" version="v1.26.1"
	I0224 23:06:34.398676       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 23:06:34.399563       1 config.go:317] "Starting service config controller"
	I0224 23:06:34.399592       1 config.go:444] "Starting node config controller"
	I0224 23:06:34.399598       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0224 23:06:34.399594       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0224 23:06:34.399662       1 config.go:226] "Starting endpoint slice config controller"
	I0224 23:06:34.399666       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0224 23:06:34.500659       1 shared_informer.go:280] Caches are synced for node config
	I0224 23:06:34.500733       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0224 23:06:34.500754       1 shared_informer.go:280] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [6dd5e22701b0] <==
	* W0224 23:06:17.469793       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 23:06:17.469804       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0224 23:06:17.469847       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0224 23:06:17.469882       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0224 23:06:17.469956       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0224 23:06:17.469994       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0224 23:06:17.470030       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0224 23:06:17.470038       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0224 23:06:17.470152       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0224 23:06:17.470331       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0224 23:06:17.470310       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0224 23:06:17.470587       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0224 23:06:18.401705       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0224 23:06:18.401776       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0224 23:06:18.477493       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0224 23:06:18.477575       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0224 23:06:18.572201       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0224 23:06:18.572257       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0224 23:06:18.599464       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0224 23:06:18.599512       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0224 23:06:18.614617       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 23:06:18.614662       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0224 23:06:18.652944       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0224 23:06:18.653027       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0224 23:06:19.066401       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 23:06:01 UTC, end at Fri 2023-02-24 23:07:26 UTC. --
	Feb 24 23:06:34 multinode-358000 kubelet[2178]: I0224 23:06:34.789670    2178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d729c67799665b2f08432f392aacc1af82748696182361f23b53e44abdfff4f9"
	Feb 24 23:06:36 multinode-358000 kubelet[2178]: I0224 23:06:36.017030    2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rsf5q" podStartSLOduration=3.017002562 pod.CreationTimestamp="2023-02-24 23:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 23:06:36.016827563 +0000 UTC m=+15.727492732" watchObservedRunningTime="2023-02-24 23:06:36.017002562 +0000 UTC m=+15.727667730"
	Feb 24 23:06:36 multinode-358000 kubelet[2178]: I0224 23:06:36.417413    2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-tkkfd" podStartSLOduration=3.41738528 pod.CreationTimestamp="2023-02-24 23:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 23:06:36.417181529 +0000 UTC m=+16.127846698" watchObservedRunningTime="2023-02-24 23:06:36.41738528 +0000 UTC m=+16.128050449"
	Feb 24 23:06:36 multinode-358000 kubelet[2178]: I0224 23:06:36.817487    2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-qfqth" podStartSLOduration=3.8174574850000003 pod.CreationTimestamp="2023-02-24 23:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 23:06:36.817257311 +0000 UTC m=+16.527922481" watchObservedRunningTime="2023-02-24 23:06:36.817457485 +0000 UTC m=+16.528122654"
	Feb 24 23:06:37 multinode-358000 kubelet[2178]: I0224 23:06:37.216766    2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.21673015 pod.CreationTimestamp="2023-02-24 23:06:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 23:06:37.216544795 +0000 UTC m=+16.927209960" watchObservedRunningTime="2023-02-24 23:06:37.21673015 +0000 UTC m=+16.927395310"
	Feb 24 23:06:41 multinode-358000 kubelet[2178]: I0224 23:06:41.055369    2178 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 24 23:06:41 multinode-358000 kubelet[2178]: I0224 23:06:41.056061    2178 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.706381    2178 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c9979a-3811-4b07-aa6d-4d766209627d-config-volume\") pod \"75c9979a-3811-4b07-aa6d-4d766209627d\" (UID: \"75c9979a-3811-4b07-aa6d-4d766209627d\") "
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.706498    2178 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbbs9\" (UniqueName: \"kubernetes.io/projected/75c9979a-3811-4b07-aa6d-4d766209627d-kube-api-access-nbbs9\") pod \"75c9979a-3811-4b07-aa6d-4d766209627d\" (UID: \"75c9979a-3811-4b07-aa6d-4d766209627d\") "
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: W0224 23:06:48.707141    2178 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/75c9979a-3811-4b07-aa6d-4d766209627d/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.707442    2178 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75c9979a-3811-4b07-aa6d-4d766209627d-config-volume" (OuterVolumeSpecName: "config-volume") pod "75c9979a-3811-4b07-aa6d-4d766209627d" (UID: "75c9979a-3811-4b07-aa6d-4d766209627d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.709579    2178 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c9979a-3811-4b07-aa6d-4d766209627d-kube-api-access-nbbs9" (OuterVolumeSpecName: "kube-api-access-nbbs9") pod "75c9979a-3811-4b07-aa6d-4d766209627d" (UID: "75c9979a-3811-4b07-aa6d-4d766209627d"). InnerVolumeSpecName "kube-api-access-nbbs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.807004    2178 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-nbbs9\" (UniqueName: \"kubernetes.io/projected/75c9979a-3811-4b07-aa6d-4d766209627d-kube-api-access-nbbs9\") on node \"multinode-358000\" DevicePath \"\""
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.807109    2178 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75c9979a-3811-4b07-aa6d-4d766209627d-config-volume\") on node \"multinode-358000\" DevicePath \"\""
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.973537    2178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0cf71fcfe55092a281a02d546d3a236123195e4007be424f5e9784c12f57587"
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.979614    2178 scope.go:115] "RemoveContainer" containerID="91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3"
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.995214    2178 scope.go:115] "RemoveContainer" containerID="91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3"
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: E0224 23:06:48.996270    2178 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3" containerID="91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3"
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.996329    2178 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3} err="failed to get container status \"91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3\": rpc error: code = Unknown desc = Error: No such container: 91d016a2b88a80e0eefe000ab743a94d5dca2f4d36ef4647fcfd2a10002db6c3"
	Feb 24 23:06:48 multinode-358000 kubelet[2178]: I0224 23:06:48.997574    2178 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-894f4" podStartSLOduration=-9.223372020857225e+09 pod.CreationTimestamp="2023-02-24 23:06:33 +0000 UTC" firstStartedPulling="2023-02-24 23:06:34.678974996 +0000 UTC m=+14.389640156" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-24 23:06:38.874057285 +0000 UTC m=+18.584722454" watchObservedRunningTime="2023-02-24 23:06:48.997549812 +0000 UTC m=+28.708214981"
	Feb 24 23:06:50 multinode-358000 kubelet[2178]: I0224 23:06:50.493061    2178 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=75c9979a-3811-4b07-aa6d-4d766209627d path="/var/lib/kubelet/pods/75c9979a-3811-4b07-aa6d-4d766209627d/volumes"
	Feb 24 23:07:11 multinode-358000 kubelet[2178]: I0224 23:07:11.581434    2178 topology_manager.go:210] "Topology Admit Handler"
	Feb 24 23:07:11 multinode-358000 kubelet[2178]: E0224 23:07:11.581514    2178 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="75c9979a-3811-4b07-aa6d-4d766209627d" containerName="coredns"
	Feb 24 23:07:11 multinode-358000 kubelet[2178]: I0224 23:07:11.581538    2178 memory_manager.go:346] "RemoveStaleState removing state" podUID="75c9979a-3811-4b07-aa6d-4d766209627d" containerName="coredns"
	Feb 24 23:07:11 multinode-358000 kubelet[2178]: I0224 23:07:11.672771    2178 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b55l6\" (UniqueName: \"kubernetes.io/projected/e12ec6d0-ab35-4586-85c7-f1e53343d029-kube-api-access-b55l6\") pod \"busybox-6b86dd6d48-tnqbs\" (UID: \"e12ec6d0-ab35-4586-85c7-f1e53343d029\") " pod="default/busybox-6b86dd6d48-tnqbs"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-358000 -n multinode-358000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-358000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.07s)
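In the kindnet log captured above, the primary node learns the second node's pod CIDR and installs a route for it (10.244.1.0/24 via 192.168.58.3). A quick way to confirm that route when triaging this failure is a short sketch along these lines, assuming the docker driver so the node container carries the profile name shown in the logs:

	# hypothetical spot-check of the inter-node pod route reported by kindnet
	docker exec multinode-358000 ip route show 10.244.1.0/24
	# a healthy primary node prints a route via 192.168.58.3; the interface name may vary
	kubectl --context multinode-358000 get po -A --field-selector=status.phase!=Running   # same filter the post-mortem helper uses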

                                                
                                    
x
+
TestRunningBinaryUpgrade (98.54s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3576844124.exe start -p running-upgrade-449000 --memory=2200 --vm-driver=docker 
E0224 15:23:13.506356   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 15:23:19.334473   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
E0224 15:23:19.339636   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
E0224 15:23:19.349779   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
E0224 15:23:19.369961   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
E0224 15:23:19.410206   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
E0224 15:23:19.490588   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
E0224 15:23:19.650899   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
E0224 15:23:19.971062   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
E0224 15:23:20.611209   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
E0224 15:23:21.893318   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3576844124.exe start -p running-upgrade-449000 --memory=2200 --vm-driver=docker : exit status 70 (1m21.767149528s)

                                                
                                                
-- stdout --
	! [running-upgrade-449000] minikube v1.9.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3332547866
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:23:02.371350401 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-449000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:23:21.880351527 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-449000", then "minikube start -p running-upgrade-449000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
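As the comments in the rewritten docker.service above spell out, systemd accepts multiple ExecStart= lines only for Type=oneshot services, so an override of the Docker command has to clear the inherited ExecStart= before setting a new one. A minimal sketch of that same pattern as a separate drop-in (the override.conf path and the dockerd flags are illustrative assumptions, not the file minikube generates):

	# illustrative drop-in showing the "clear, then set" ExecStart pattern; path and flags are assumptions
	sudo mkdir -p /etc/systemd/system/docker.service.d
	printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock\n' | sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null
	sudo systemctl daemon-reload && sudo systemctl restart docker

Both failed attempts above generate the identical unit (including an ExecReload=/bin/kill -s HUP line that has lost its $MAINPID argument), so the retry fails the same way as the first attempt.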
** stderr ** 
	* minikube 1.29.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.29.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 76.24 KiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 530.50 MiB    > preloaded-images-k8s-v2-v1.18.0
-docker-overlay2-amd64.tar.lz4: 534.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 536.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 539.70 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:23:21.880351527 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
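The restart failure above is docker.service refusing to start after minikube overwrites /lib/systemd/system/docker.service with the generated unit shown in the diff. The error already points at the two follow-up commands; a minimal sketch of running them by hand against the node container, where <minikube-container> is a hypothetical placeholder for the failing profile's container name (an assumption, not taken from this run):

# Ask systemd why docker.service did not come up inside the minikube node container.
# <minikube-container> is a placeholder; substitute the failing profile's container.
docker exec <minikube-container> systemctl status docker.service --no-pager
docker exec <minikube-container> journalctl -u docker.service --no-pager | tail -n 50
# Confirm which unit file systemd is actually loading versus what minikube wrote.
docker exec <minikube-container> systemctl cat docker.service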
E0224 15:23:24.455621   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3576844124.exe start -p running-upgrade-449000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3576844124.exe start -p running-upgrade-449000 --memory=2200 --vm-driver=docker : exit status 70 (3.349387892s)

                                                
                                                
-- stdout --
	* [running-upgrade-449000] minikube v1.9.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3321088387
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-449000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3576844124.exe start -p running-upgrade-449000 --memory=2200 --vm-driver=docker 
E0224 15:23:29.576125   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3576844124.exe start -p running-upgrade-449000 --memory=2200 --vm-driver=docker : exit status 70 (4.395186774s)

                                                
                                                
-- stdout --
	* [running-upgrade-449000] minikube v1.9.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1986668082
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-449000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:134: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-02-24 15:23:33.947825 -0800 PST m=+2566.643845353
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-449000
helpers_test.go:235: (dbg) docker inspect running-upgrade-449000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ad23c2f50fc4e7c01d5c088f29908f664d3284b4ee06d4fd9459508385f1b5d5",
	        "Created": "2023-02-24T23:23:10.615672088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 555387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T23:23:10.847275257Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/ad23c2f50fc4e7c01d5c088f29908f664d3284b4ee06d4fd9459508385f1b5d5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad23c2f50fc4e7c01d5c088f29908f664d3284b4ee06d4fd9459508385f1b5d5/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad23c2f50fc4e7c01d5c088f29908f664d3284b4ee06d4fd9459508385f1b5d5/hosts",
	        "LogPath": "/var/lib/docker/containers/ad23c2f50fc4e7c01d5c088f29908f664d3284b4ee06d4fd9459508385f1b5d5/ad23c2f50fc4e7c01d5c088f29908f664d3284b4ee06d4fd9459508385f1b5d5-json.log",
	        "Name": "/running-upgrade-449000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-449000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9a87c24820193431683837a6e9a2f0642eb678a0a7f169f165646ec443668918-init/diff:/var/lib/docker/overlay2/c13d67ee259a145834be131d120f57a8b74b26f03264cbaa49a6e7f89c2695ea/diff:/var/lib/docker/overlay2/7aae925b332d02c79f368825822f5c8d6b8c82e1f82ded62b82e4e2aeef670bd/diff:/var/lib/docker/overlay2/7b4a4d59152394c69b456d14f028c31329891e6f604acbd7e712d9546261d2e4/diff:/var/lib/docker/overlay2/2aece4b18a46f3ca6fdf10cec972d712837ccf924a91645bc2b42b60dca891ab/diff:/var/lib/docker/overlay2/8308500ba2e3166db5789fd9526626bfa28ea6618735de4a023b242fe6c5d9e9/diff:/var/lib/docker/overlay2/57c2c56bd4013f092332d4f267fd259293e918d12beabad8147b8c31a4095c4c/diff:/var/lib/docker/overlay2/6e19fdf7d724140c232bc24d73d7ba4a37cc8e9416280d33565adf5cc6863599/diff:/var/lib/docker/overlay2/bacc5d4bb78fb84890f2e628a25ba01772950d6298f93abce799ea6ccaafa167/diff:/var/lib/docker/overlay2/0c23a7f22bbb1a1577e622874447b59217772d1322184866f058b6a4ee593c0f/diff:/var/lib/docker/overlay2/e69b5d
b0926c48fca036abe9031096467369444e9a8247be4a9d4e60ab8d3f59/diff:/var/lib/docker/overlay2/d5f3d88881cf71cb07a50061bb950cac2afeb9f8132ef4e5c9a16d67c0818fdc/diff:/var/lib/docker/overlay2/3bd4fab84ff9d15eab75f77ef4283da0755d5424845045488786038fbf03f213/diff:/var/lib/docker/overlay2/6393d88f777bd1f782a595e004a2f7d6650a32225d196691fe0884c1ae396ffa/diff:/var/lib/docker/overlay2/c7983a89021b05ace00f6872220a4e6af305227df2de1b4f5d82436fb94f59a9/diff:/var/lib/docker/overlay2/5fb749c964bbe3fc186ca9fa17a5505c2448e1c0a1ab5727dc45b0132354445e/diff:/var/lib/docker/overlay2/9a3daa91e271a19f83c03847aefb1b63815ba6aa6150b5700b8b91505bb88471/diff:/var/lib/docker/overlay2/b324c9cb70f4af14ef9f3c912de478d470138826674d95b4de56854729d609a1/diff:/var/lib/docker/overlay2/ad8d95b3d98fdfd627dfb8d141a822d6089a95aeb7bb350ddba19bd064f344be/diff:/var/lib/docker/overlay2/2e8292cf3d7ed7c67dea80ddd66cb9e05109c4d3c9ba81800db67b4150e91294/diff:/var/lib/docker/overlay2/6ccba9f2d78485aaead12ebf34a707c82af9172224a9b45273f12c86e0a8559d/diff:/var/lib/d
ocker/overlay2/9388ff11ba9171b0d512e7500a2e393d19b9c51f4dc181220daee728bd0452c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a87c24820193431683837a6e9a2f0642eb678a0a7f169f165646ec443668918/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a87c24820193431683837a6e9a2f0642eb678a0a7f169f165646ec443668918/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a87c24820193431683837a6e9a2f0642eb678a0a7f169f165646ec443668918/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-449000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-449000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-449000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-449000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-449000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c4222bf13697c62c4a8811cb7de9a011a36651109812358c396bcb2c1d37299",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59396"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59397"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59395"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8c4222bf1369",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "83f98b356828bc33801d68db77500c6c3334db0df2bf7fae5201c44e05fe896b",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "6f94c1d740c064c2ad9e97d1a8e110ee01e0317576244f5dcb4130ae7c7f6f60",
	                    "EndpointID": "83f98b356828bc33801d68db77500c6c3334db0df2bf7fae5201c44e05fe896b",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
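The inspect dump above can be narrowed to the fields the post-mortem cares about with docker's --format templating; a minimal sketch using the container name from this run:

# Print only the runtime state instead of the full inspect JSON.
docker inspect -f '{{.State.Status}} exit={{.State.ExitCode}} started={{.State.StartedAt}}' running-upgrade-449000
# Show the published host ports (22, 2376, 8443) listed in the output above.
docker inspect -f '{{json .NetworkSettings.Ports}}' running-upgrade-449000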
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-449000 -n running-upgrade-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-449000 -n running-upgrade-449000: exit status 6 (381.572345ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 15:23:34.376263   37787 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-449000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-449000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-449000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-449000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-449000: (2.316698129s)
--- FAIL: TestRunningBinaryUpgrade (98.54s)
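TestRunningBinaryUpgrade drives the old release binary first, so the failing legacy step can be retried outside the harness; a minimal sketch reusing the binary path and profile printed above (both specific to this run), with a log dump on failure:

# Re-run the legacy v1.9.0 start that failed above; on a non-zero exit, collect cluster logs.
LEGACY_BIN=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3576844124.exe
"$LEGACY_BIN" start -p running-upgrade-449000 --memory=2200 --vm-driver=docker \
  || out/minikube-darwin-amd64 -p running-upgrade-449000 logs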

                                                
                                    
x
+
TestKubernetesUpgrade (557.81s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0224 15:25:10.456153   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
version_upgrade_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m10.606994788s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-122000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-122000 in cluster kubernetes-upgrade-122000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 15:24:48.793155   38236 out.go:296] Setting OutFile to fd 1 ...
	I0224 15:24:48.793688   38236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:24:48.793694   38236 out.go:309] Setting ErrFile to fd 2...
	I0224 15:24:48.793698   38236 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:24:48.793889   38236 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 15:24:48.795607   38236 out.go:303] Setting JSON to false
	I0224 15:24:48.814784   38236 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8662,"bootTime":1677272426,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 15:24:48.814904   38236 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 15:24:48.836203   38236 out.go:177] * [kubernetes-upgrade-122000] minikube v1.29.0 on Darwin 13.2.1
	I0224 15:24:48.878324   38236 notify.go:220] Checking for updates...
	I0224 15:24:48.899093   38236 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 15:24:48.920355   38236 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:24:48.941280   38236 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 15:24:48.962189   38236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 15:24:48.983296   38236 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	I0224 15:24:49.004358   38236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 15:24:49.025587   38236 config.go:182] Loaded profile config "cert-expiration-713000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:24:49.025663   38236 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 15:24:49.090053   38236 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 15:24:49.090179   38236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:24:49.233784   38236 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-24 23:24:49.139448538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:24:49.255717   38236 out.go:177] * Using the docker driver based on user configuration
	I0224 15:24:49.292620   38236 start.go:296] selected driver: docker
	I0224 15:24:49.292691   38236 start.go:857] validating driver "docker" against <nil>
	I0224 15:24:49.292714   38236 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 15:24:49.296928   38236 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:24:49.454547   38236 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-24 23:24:49.356744893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:24:49.454669   38236 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0224 15:24:49.454861   38236 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 15:24:49.477987   38236 out.go:177] * Using Docker Desktop driver with root privileges
	I0224 15:24:49.497684   38236 cni.go:84] Creating CNI manager for ""
	I0224 15:24:49.497708   38236 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0224 15:24:49.497718   38236 start_flags.go:319] config:
	{Name:kubernetes-upgrade-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:24:49.555885   38236 out.go:177] * Starting control plane node kubernetes-upgrade-122000 in cluster kubernetes-upgrade-122000
	I0224 15:24:49.594117   38236 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 15:24:49.631835   38236 out.go:177] * Pulling base image ...
	I0224 15:24:49.669075   38236 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 15:24:49.669150   38236 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 15:24:49.669190   38236 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0224 15:24:49.669220   38236 cache.go:57] Caching tarball of preloaded images
	I0224 15:24:49.669482   38236 preload.go:174] Found /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 15:24:49.669506   38236 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0224 15:24:49.670466   38236 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/config.json ...
	I0224 15:24:49.670628   38236 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/config.json: {Name:mk77ca30467ec957c96ae29729d1ae07977f1b62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:24:49.729367   38236 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 15:24:49.729395   38236 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 15:24:49.729412   38236 cache.go:193] Successfully downloaded all kic artifacts
	I0224 15:24:49.729458   38236 start.go:364] acquiring machines lock for kubernetes-upgrade-122000: {Name:mk652cd91b310ddade995e61f87af59023b8312d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 15:24:49.729616   38236 start.go:368] acquired machines lock for "kubernetes-upgrade-122000" in 146.567µs
	I0224 15:24:49.729649   38236 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-122000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 15:24:49.729738   38236 start.go:125] createHost starting for "" (driver="docker")
	I0224 15:24:49.788057   38236 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0224 15:24:49.788289   38236 start.go:159] libmachine.API.Create for "kubernetes-upgrade-122000" (driver="docker")
	I0224 15:24:49.788314   38236 client.go:168] LocalClient.Create starting
	I0224 15:24:49.788454   38236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem
	I0224 15:24:49.788502   38236 main.go:141] libmachine: Decoding PEM data...
	I0224 15:24:49.788520   38236 main.go:141] libmachine: Parsing certificate...
	I0224 15:24:49.788593   38236 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem
	I0224 15:24:49.788627   38236 main.go:141] libmachine: Decoding PEM data...
	I0224 15:24:49.788635   38236 main.go:141] libmachine: Parsing certificate...
	I0224 15:24:49.789064   38236 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-122000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0224 15:24:49.846201   38236 cli_runner.go:211] docker network inspect kubernetes-upgrade-122000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0224 15:24:49.846311   38236 network_create.go:281] running [docker network inspect kubernetes-upgrade-122000] to gather additional debugging logs...
	I0224 15:24:49.846328   38236 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-122000
	W0224 15:24:49.904400   38236 cli_runner.go:211] docker network inspect kubernetes-upgrade-122000 returned with exit code 1
	I0224 15:24:49.904427   38236 network_create.go:284] error running [docker network inspect kubernetes-upgrade-122000]: docker network inspect kubernetes-upgrade-122000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-122000
	I0224 15:24:49.904442   38236 network_create.go:286] output of [docker network inspect kubernetes-upgrade-122000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-122000
	
	** /stderr **
	I0224 15:24:49.904547   38236 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 15:24:49.967074   38236 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0224 15:24:49.967430   38236 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00117e030}
	I0224 15:24:49.967449   38236 network_create.go:123] attempt to create docker network kubernetes-upgrade-122000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0224 15:24:49.967530   38236 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-122000 kubernetes-upgrade-122000
	W0224 15:24:50.024024   38236 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-122000 kubernetes-upgrade-122000 returned with exit code 1
	W0224 15:24:50.024061   38236 network_create.go:148] failed to create docker network kubernetes-upgrade-122000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-122000 kubernetes-upgrade-122000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0224 15:24:50.024077   38236 network_create.go:115] failed to create docker network kubernetes-upgrade-122000 192.168.58.0/24, will retry: subnet is taken
	I0224 15:24:50.025496   38236 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0224 15:24:50.025838   38236 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0008cb7b0}
	I0224 15:24:50.025852   38236 network_create.go:123] attempt to create docker network kubernetes-upgrade-122000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0224 15:24:50.025923   38236 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-122000 kubernetes-upgrade-122000
	I0224 15:24:50.121191   38236 network_create.go:107] docker network kubernetes-upgrade-122000 192.168.67.0/24 created
	I0224 15:24:50.121241   38236 kic.go:117] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-122000" container
	I0224 15:24:50.121363   38236 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 15:24:50.179465   38236 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-122000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-122000 --label created_by.minikube.sigs.k8s.io=true
	I0224 15:24:50.235852   38236 oci.go:103] Successfully created a docker volume kubernetes-upgrade-122000
	I0224 15:24:50.235977   38236 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-122000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-122000 --entrypoint /usr/bin/test -v kubernetes-upgrade-122000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0224 15:24:50.712182   38236 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-122000
	I0224 15:24:50.712232   38236 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 15:24:50.712246   38236 kic.go:190] Starting extracting preloaded images to volume ...
	I0224 15:24:50.712353   38236 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-122000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 15:24:56.946724   38236 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-122000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.234084336s)
	I0224 15:24:56.946746   38236 kic.go:199] duration metric: took 6.234321 seconds to extract preloaded images to volume
	I0224 15:24:56.946867   38236 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 15:24:57.087945   38236 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-122000 --name kubernetes-upgrade-122000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-122000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-122000 --network kubernetes-upgrade-122000 --ip 192.168.67.2 --volume kubernetes-upgrade-122000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0224 15:24:57.454700   38236 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-122000 --format={{.State.Running}}
	I0224 15:24:57.515734   38236 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-122000 --format={{.State.Status}}
	I0224 15:24:57.577320   38236 cli_runner.go:164] Run: docker exec kubernetes-upgrade-122000 stat /var/lib/dpkg/alternatives/iptables
	I0224 15:24:57.686868   38236 oci.go:144] the created container "kubernetes-upgrade-122000" has a running status.
	I0224 15:24:57.686908   38236 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa...
	I0224 15:24:57.789449   38236 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 15:24:57.893862   38236 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-122000 --format={{.State.Status}}
	I0224 15:24:57.957155   38236 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 15:24:57.957175   38236 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-122000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 15:24:58.060464   38236 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-122000 --format={{.State.Status}}
	I0224 15:24:58.116827   38236 machine.go:88] provisioning docker machine ...
	I0224 15:24:58.116866   38236 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-122000"
	I0224 15:24:58.117090   38236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:24:58.175218   38236 main.go:141] libmachine: Using SSH client type: native
	I0224 15:24:58.175602   38236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59525 <nil> <nil>}
	I0224 15:24:58.175616   38236 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-122000 && echo "kubernetes-upgrade-122000" | sudo tee /etc/hostname
	I0224 15:24:58.321145   38236 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-122000
	
	I0224 15:24:58.321236   38236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:24:58.378986   38236 main.go:141] libmachine: Using SSH client type: native
	I0224 15:24:58.379336   38236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59525 <nil> <nil>}
	I0224 15:24:58.379350   38236 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-122000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-122000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-122000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 15:24:58.514613   38236 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 15:24:58.514634   38236 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
	I0224 15:24:58.514650   38236 ubuntu.go:177] setting up certificates
	I0224 15:24:58.514657   38236 provision.go:83] configureAuth start
	I0224 15:24:58.514740   38236 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-122000
	I0224 15:24:58.571662   38236 provision.go:138] copyHostCerts
	I0224 15:24:58.571758   38236 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
	I0224 15:24:58.571768   38236 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:24:58.571890   38236 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
	I0224 15:24:58.572095   38236 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
	I0224 15:24:58.572102   38236 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:24:58.572174   38236 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
	I0224 15:24:58.572313   38236 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
	I0224 15:24:58.572318   38236 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:24:58.572389   38236 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
	I0224 15:24:58.572503   38236 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-122000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-122000]
	I0224 15:24:58.651394   38236 provision.go:172] copyRemoteCerts
	I0224 15:24:58.651454   38236 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 15:24:58.651517   38236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:24:58.708344   38236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59525 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa Username:docker}
	I0224 15:24:58.804043   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0224 15:24:58.834595   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0224 15:24:58.851994   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 15:24:58.869429   38236 provision.go:86] duration metric: configureAuth took 354.7503ms
	I0224 15:24:58.869442   38236 ubuntu.go:193] setting minikube options for container-runtime
	I0224 15:24:58.869580   38236 config.go:182] Loaded profile config "kubernetes-upgrade-122000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0224 15:24:58.869650   38236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:24:58.928706   38236 main.go:141] libmachine: Using SSH client type: native
	I0224 15:24:58.929057   38236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59525 <nil> <nil>}
	I0224 15:24:58.929073   38236 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 15:24:59.065987   38236 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 15:24:59.066003   38236 ubuntu.go:71] root file system type: overlay
	I0224 15:24:59.066096   38236 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 15:24:59.066180   38236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:24:59.125640   38236 main.go:141] libmachine: Using SSH client type: native
	I0224 15:24:59.125979   38236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59525 <nil> <nil>}
	I0224 15:24:59.126028   38236 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 15:24:59.269712   38236 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 15:24:59.269834   38236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:24:59.327822   38236 main.go:141] libmachine: Using SSH client type: native
	I0224 15:24:59.328333   38236 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59525 <nil> <nil>}
	I0224 15:24:59.328349   38236 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 15:24:59.955980   38236 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:24:59.266162222 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 15:24:59.956005   38236 machine.go:91] provisioned docker machine in 1.839107068s
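The docker.service update shown above uses a write-if-changed idiom: the rendered unit is written to docker.service.new, and only when diff -u reports a difference is it moved into place and the daemon reloaded, re-enabled, and restarted, which is why an already up-to-date unit costs nothing. Roughly, assuming the new unit has already been rendered to $NEW as in the log:

    # Install a rendered systemd unit only when it differs from the installed one.
    UNIT=/lib/systemd/system/docker.service
    NEW=$UNIT.new
    if ! sudo diff -u "$UNIT" "$NEW"; then
      sudo mv "$NEW" "$UNIT"
      sudo systemctl daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi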
	I0224 15:24:59.956011   38236 client.go:171] LocalClient.Create took 10.16740093s
	I0224 15:24:59.956036   38236 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-122000" took 10.167455534s
	I0224 15:24:59.956046   38236 start.go:300] post-start starting for "kubernetes-upgrade-122000" (driver="docker")
	I0224 15:24:59.956052   38236 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 15:24:59.956140   38236 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 15:24:59.956194   38236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:25:00.017830   38236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59525 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa Username:docker}
	I0224 15:25:00.114309   38236 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 15:25:00.118031   38236 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 15:25:00.118050   38236 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 15:25:00.118057   38236 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 15:25:00.118062   38236 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 15:25:00.118073   38236 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
	I0224 15:25:00.118167   38236 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
	I0224 15:25:00.118340   38236 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
	I0224 15:25:00.118537   38236 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 15:25:00.125916   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:25:00.143258   38236 start.go:303] post-start completed in 187.196107ms
	I0224 15:25:00.143794   38236 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-122000
	I0224 15:25:00.200450   38236 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/config.json ...
	I0224 15:25:00.200880   38236 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 15:25:00.200944   38236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:25:00.258047   38236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59525 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa Username:docker}
	I0224 15:25:00.351340   38236 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 15:25:00.356018   38236 start.go:128] duration metric: createHost completed in 10.625967569s
	I0224 15:25:00.356034   38236 start.go:83] releasing machines lock for "kubernetes-upgrade-122000", held for 10.626105311s
	I0224 15:25:00.356164   38236 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-122000
	I0224 15:25:00.415072   38236 ssh_runner.go:195] Run: cat /version.json
	I0224 15:25:00.415087   38236 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0224 15:25:00.415149   38236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:25:00.415168   38236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:25:00.474805   38236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59525 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa Username:docker}
	I0224 15:25:00.475083   38236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59525 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa Username:docker}
	I0224 15:25:00.866820   38236 ssh_runner.go:195] Run: systemctl --version
	I0224 15:25:00.871736   38236 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 15:25:00.876620   38236 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 15:25:00.896613   38236 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 15:25:00.896688   38236 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0224 15:25:00.910726   38236 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0224 15:25:00.918737   38236 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0224 15:25:00.918752   38236 start.go:485] detecting cgroup driver to use...
	I0224 15:25:00.918763   38236 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:25:00.918865   38236 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:25:00.932278   38236 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0224 15:25:00.941266   38236 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 15:25:00.949950   38236 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 15:25:00.950016   38236 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 15:25:00.958525   38236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:25:00.967057   38236 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 15:25:00.975464   38236 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:25:00.983958   38236 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 15:25:00.991834   38236 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 15:25:01.000271   38236 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 15:25:01.007574   38236 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 15:25:01.014713   38236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:25:01.081217   38236 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 15:25:01.153302   38236 start.go:485] detecting cgroup driver to use...
	I0224 15:25:01.153325   38236 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:25:01.153401   38236 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 15:25:01.163558   38236 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 15:25:01.163623   38236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 15:25:01.173592   38236 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:25:01.188001   38236 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 15:25:01.294613   38236 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 15:25:01.388372   38236 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 15:25:01.388393   38236 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 15:25:01.401934   38236 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:25:01.480690   38236 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 15:25:01.721311   38236 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:25:01.748987   38236 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:25:01.815898   38236 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0224 15:25:01.816030   38236 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-122000 dig +short host.docker.internal
	I0224 15:25:01.941570   38236 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 15:25:01.941694   38236 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 15:25:01.946174   38236 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
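The host.minikube.internal mapping above is added with a filter-append-copy pattern: any existing line for the name is stripped, the fresh mapping is appended, and the result is copied back over /etc/hosts in a single sudo cp. A generic version of the same idiom, with the address and hostname as placeholders rather than the test's values:

    # Ensure /etc/hosts maps NAME to ADDR exactly once (placeholder values).
    ADDR=192.0.2.10
    NAME=host.example.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$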
	I0224 15:25:01.956401   38236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:25:02.015196   38236 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 15:25:02.015280   38236 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:25:02.036503   38236 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0224 15:25:02.036522   38236 docker.go:560] Images already preloaded, skipping extraction
	I0224 15:25:02.036604   38236 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:25:02.056238   38236 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0224 15:25:02.056253   38236 cache_images.go:84] Images are preloaded, skipping loading
	I0224 15:25:02.056339   38236 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
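The docker info query above is what feeds the cgroupDriver field in the kubelet configuration rendered further down; on this node it reports cgroupfs, matching the earlier "detected \"cgroupfs\" cgroup driver" lines. To repeat the check from the host against the kic node (wrapping it in docker exec is an assumption here; the test runs it over SSH instead):

    # Ask the node's inner Docker daemon which cgroup driver it uses.
    docker exec kubernetes-upgrade-122000 docker info --format '{{.CgroupDriver}}'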
	I0224 15:25:02.083289   38236 cni.go:84] Creating CNI manager for ""
	I0224 15:25:02.083306   38236 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0224 15:25:02.083320   38236 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 15:25:02.083335   38236 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-122000 NodeName:kubernetes-upgrade-122000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 15:25:02.083454   38236 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-122000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-122000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 15:25:02.083531   38236 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-122000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 15:25:02.083602   38236 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0224 15:25:02.091678   38236 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 15:25:02.091739   38236 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 15:25:02.099226   38236 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0224 15:25:02.111992   38236 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 15:25:02.125033   38236 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
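The 2180-byte file written just above is the four-document config rendered at 15:25:02.083454: an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta1), a KubeletConfiguration, and a KubeProxyConfiguration. It is later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm runs, so a quick way to confirm what actually landed on the node is to list the document kinds in that rendered file (a sketch, run from inside the node against the test's path):

    # List the API versions and kinds of each document in the rendered kubeadm config.
    grep -nE '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml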
	I0224 15:25:02.138183   38236 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0224 15:25:02.142042   38236 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:25:02.151924   38236 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000 for IP: 192.168.67.2
	I0224 15:25:02.151942   38236 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:25:02.152127   38236 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 15:25:02.152191   38236 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 15:25:02.152233   38236 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.key
	I0224 15:25:02.152249   38236 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.crt with IP's: []
	I0224 15:25:02.259228   38236 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.crt ...
	I0224 15:25:02.259238   38236 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.crt: {Name:mkab9e4571e151c55126cffa3bb24be08a0b4074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:25:02.259562   38236 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.key ...
	I0224 15:25:02.259569   38236 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.key: {Name:mk15773cca57b14e44fab37133147c77e637b756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:25:02.259761   38236 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.key.c7fa3a9e
	I0224 15:25:02.259775   38236 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0224 15:25:02.383831   38236 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.crt.c7fa3a9e ...
	I0224 15:25:02.383845   38236 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.crt.c7fa3a9e: {Name:mkfde6d822cd14e97b02ff422c6d6eabb3ab7756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:25:02.384129   38236 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.key.c7fa3a9e ...
	I0224 15:25:02.384137   38236 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.key.c7fa3a9e: {Name:mk893e3e8f44cfa1f2bc2dd64a07e47094191e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:25:02.384328   38236 certs.go:333] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.crt
	I0224 15:25:02.384517   38236 certs.go:337] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.key
	I0224 15:25:02.384692   38236 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/proxy-client.key
	I0224 15:25:02.384706   38236 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/proxy-client.crt with IP's: []
	I0224 15:25:02.624110   38236 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/proxy-client.crt ...
	I0224 15:25:02.624127   38236 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/proxy-client.crt: {Name:mka005efc891fdfddcb1475d17f0cd341e3ef2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:25:02.624419   38236 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/proxy-client.key ...
	I0224 15:25:02.624426   38236 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/proxy-client.key: {Name:mkcd5b853bdc418aa49ee560e0bce8af8780f085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:25:02.624807   38236 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 15:25:02.624856   38236 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 15:25:02.624866   38236 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 15:25:02.624902   38236 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 15:25:02.624934   38236 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 15:25:02.624965   38236 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 15:25:02.625031   38236 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:25:02.625520   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 15:25:02.644341   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 15:25:02.661890   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 15:25:02.679387   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 15:25:02.696672   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 15:25:02.714273   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 15:25:02.732075   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 15:25:02.749555   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 15:25:02.767412   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 15:25:02.784985   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 15:25:02.802486   38236 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 15:25:02.820077   38236 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 15:25:02.833289   38236 ssh_runner.go:195] Run: openssl version
	I0224 15:25:02.839273   38236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 15:25:02.847673   38236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 15:25:02.851660   38236 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:25:02.851710   38236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 15:25:02.857270   38236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
	I0224 15:25:02.865752   38236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 15:25:02.874272   38236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 15:25:02.878421   38236 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:25:02.878475   38236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 15:25:02.884189   38236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 15:25:02.892580   38236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 15:25:02.900959   38236 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:25:02.905251   38236 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:25:02.905299   38236 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:25:02.910728   38236 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 15:25:02.918980   38236 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:25:02.919085   38236 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 15:25:02.938460   38236 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 15:25:02.946587   38236 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 15:25:02.954223   38236 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 15:25:02.954275   38236 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 15:25:02.961946   38236 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 15:25:02.961977   38236 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 15:25:03.010838   38236 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0224 15:25:03.010884   38236 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 15:25:03.187991   38236 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 15:25:03.188102   38236 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 15:25:03.188232   38236 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 15:25:03.363240   38236 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:25:03.364033   38236 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:25:03.371038   38236 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0224 15:25:03.447644   38236 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 15:25:03.473149   38236 out.go:204]   - Generating certificates and keys ...
	I0224 15:25:03.473281   38236 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 15:25:03.473362   38236 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 15:25:03.594008   38236 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 15:25:03.747934   38236 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0224 15:25:03.823819   38236 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0224 15:25:03.966113   38236 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0224 15:25:04.036559   38236 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0224 15:25:04.036855   38236 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-122000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0224 15:25:04.139246   38236 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0224 15:25:04.139444   38236 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-122000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0224 15:25:04.361212   38236 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 15:25:04.464652   38236 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 15:25:04.638413   38236 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0224 15:25:04.638509   38236 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 15:25:04.733980   38236 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 15:25:04.861843   38236 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 15:25:05.093081   38236 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 15:25:05.215263   38236 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 15:25:05.215767   38236 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 15:25:05.237332   38236 out.go:204]   - Booting up control plane ...
	I0224 15:25:05.237429   38236 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 15:25:05.237522   38236 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 15:25:05.237605   38236 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 15:25:05.237672   38236 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 15:25:05.237849   38236 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 15:25:45.226328   38236 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0224 15:25:45.227096   38236 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:25:45.227292   38236 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:25:50.228369   38236 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:25:50.228537   38236 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:26:00.229761   38236 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:26:00.230037   38236 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:26:20.231483   38236 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:26:20.231667   38236 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:27:00.233838   38236 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:27:00.234027   38236 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:27:00.234037   38236 kubeadm.go:322] 
	I0224 15:27:00.234064   38236 kubeadm.go:322] Unfortunately, an error has occurred:
	I0224 15:27:00.234097   38236 kubeadm.go:322] 	timed out waiting for the condition
	I0224 15:27:00.234105   38236 kubeadm.go:322] 
	I0224 15:27:00.234142   38236 kubeadm.go:322] This error is likely caused by:
	I0224 15:27:00.234172   38236 kubeadm.go:322] 	- The kubelet is not running
	I0224 15:27:00.234257   38236 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 15:27:00.234270   38236 kubeadm.go:322] 
	I0224 15:27:00.234377   38236 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 15:27:00.234426   38236 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0224 15:27:00.234467   38236 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0224 15:27:00.234481   38236 kubeadm.go:322] 
	I0224 15:27:00.234582   38236 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 15:27:00.234663   38236 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0224 15:27:00.234774   38236 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0224 15:27:00.234829   38236 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0224 15:27:00.234890   38236 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0224 15:27:00.234920   38236 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0224 15:27:00.238080   38236 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0224 15:27:00.238167   38236 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0224 15:27:00.238276   38236 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0224 15:27:00.238390   38236 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:27:00.238484   38236 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 15:27:00.238552   38236 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0224 15:27:00.238734   38236 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-122000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-122000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0224 15:27:00.238762   38236 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0224 15:27:00.653116   38236 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:27:00.663434   38236 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 15:27:00.663550   38236 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 15:27:00.671956   38236 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 15:27:00.671982   38236 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 15:27:00.721010   38236 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0224 15:27:00.721050   38236 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 15:27:00.900517   38236 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 15:27:00.900586   38236 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 15:27:00.900678   38236 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 15:27:01.067235   38236 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:27:01.068203   38236 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:27:01.074927   38236 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0224 15:27:01.143778   38236 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 15:27:01.186189   38236 out.go:204]   - Generating certificates and keys ...
	I0224 15:27:01.186336   38236 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 15:27:01.186474   38236 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 15:27:01.186576   38236 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0224 15:27:01.186670   38236 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0224 15:27:01.186835   38236 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0224 15:27:01.186885   38236 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0224 15:27:01.186989   38236 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0224 15:27:01.187090   38236 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0224 15:27:01.187189   38236 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0224 15:27:01.187275   38236 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0224 15:27:01.187331   38236 kubeadm.go:322] [certs] Using the existing "sa" key
	I0224 15:27:01.187421   38236 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 15:27:01.233742   38236 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 15:27:01.341054   38236 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 15:27:01.539414   38236 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 15:27:01.755487   38236 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 15:27:01.756638   38236 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 15:27:01.778208   38236 out.go:204]   - Booting up control plane ...
	I0224 15:27:01.778320   38236 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 15:27:01.778402   38236 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 15:27:01.778457   38236 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 15:27:01.778521   38236 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 15:27:01.778636   38236 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 15:27:41.767240   38236 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0224 15:27:41.768031   38236 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:27:41.768256   38236 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:27:46.768260   38236 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:27:46.768436   38236 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:27:56.770788   38236 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:27:56.771024   38236 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:28:16.771697   38236 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:28:16.771867   38236 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:28:56.773661   38236 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:28:56.773857   38236 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:28:56.773872   38236 kubeadm.go:322] 
	I0224 15:28:56.773904   38236 kubeadm.go:322] Unfortunately, an error has occurred:
	I0224 15:28:56.773930   38236 kubeadm.go:322] 	timed out waiting for the condition
	I0224 15:28:56.773935   38236 kubeadm.go:322] 
	I0224 15:28:56.773959   38236 kubeadm.go:322] This error is likely caused by:
	I0224 15:28:56.773978   38236 kubeadm.go:322] 	- The kubelet is not running
	I0224 15:28:56.774094   38236 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 15:28:56.774129   38236 kubeadm.go:322] 
	I0224 15:28:56.774221   38236 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 15:28:56.774259   38236 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0224 15:28:56.774291   38236 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0224 15:28:56.774298   38236 kubeadm.go:322] 
	I0224 15:28:56.774381   38236 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 15:28:56.774461   38236 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0224 15:28:56.774562   38236 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0224 15:28:56.774601   38236 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0224 15:28:56.774662   38236 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0224 15:28:56.774685   38236 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0224 15:28:56.777935   38236 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0224 15:28:56.778001   38236 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0224 15:28:56.778113   38236 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0224 15:28:56.778190   38236 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:28:56.778255   38236 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 15:28:56.778313   38236 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0224 15:28:56.778332   38236 kubeadm.go:403] StartCluster complete in 3m53.852658534s
	I0224 15:28:56.778428   38236 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:28:56.800332   38236 logs.go:277] 0 containers: []
	W0224 15:28:56.800347   38236 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:28:56.800419   38236 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:28:56.821666   38236 logs.go:277] 0 containers: []
	W0224 15:28:56.821678   38236 logs.go:279] No container was found matching "etcd"
	I0224 15:28:56.821752   38236 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:28:56.843317   38236 logs.go:277] 0 containers: []
	W0224 15:28:56.843330   38236 logs.go:279] No container was found matching "coredns"
	I0224 15:28:56.843415   38236 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:28:56.866680   38236 logs.go:277] 0 containers: []
	W0224 15:28:56.866697   38236 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:28:56.866772   38236 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:28:56.889629   38236 logs.go:277] 0 containers: []
	W0224 15:28:56.889642   38236 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:28:56.889708   38236 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:28:56.914100   38236 logs.go:277] 0 containers: []
	W0224 15:28:56.914119   38236 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:28:56.914225   38236 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:28:56.936418   38236 logs.go:277] 0 containers: []
	W0224 15:28:56.936437   38236 logs.go:279] No container was found matching "kindnet"
	I0224 15:28:56.936447   38236 logs.go:123] Gathering logs for kubelet ...
	I0224 15:28:56.936456   38236 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:28:56.983744   38236 logs.go:123] Gathering logs for dmesg ...
	I0224 15:28:56.983764   38236 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:28:56.999437   38236 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:28:56.999454   38236 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:28:57.064498   38236 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:28:57.064515   38236 logs.go:123] Gathering logs for Docker ...
	I0224 15:28:57.064525   38236 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:28:57.092584   38236 logs.go:123] Gathering logs for container status ...
	I0224 15:28:57.092606   38236 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:28:59.152851   38236 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060170578s)
	W0224 15:28:59.153008   38236 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0224 15:28:59.153030   38236 out.go:239] * 
	W0224 15:28:59.153182   38236 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 15:28:59.153203   38236 out.go:239] * 
	W0224 15:28:59.154055   38236 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 15:28:59.216301   38236 out.go:177] 
	W0224 15:28:59.258354   38236 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 15:28:59.258440   38236 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0224 15:28:59.258587   38236 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0224 15:28:59.300259   38236 out.go:177] 

                                                
                                                
** /stderr **
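The failed start above is the v1.16.0 attempt on profile kubernetes-upgrade-122000: kubeadm times out waiting for the kubelet healthz endpoint while warning that Docker is using the "cgroupfs" cgroup driver rather than the recommended "systemd". A minimal sketch of repeating those two checks by hand and applying the flag named in the suggestion, assuming the node container is still up and has curl available (whether the cgroup-driver mismatch is the actual root cause of this failure is not established by the log):

    out/minikube-darwin-amd64 -p kubernetes-upgrade-122000 ssh -- docker info --format {{.CgroupDriver}}   # driver reported inside the node
    out/minikube-darwin-amd64 -p kubernetes-upgrade-122000 ssh -- curl -sSL http://localhost:10248/healthz   # the probe kubeadm was retrying
    out/minikube-darwin-amd64 start -p kubernetes-upgrade-122000 --extra-config=kubelet.cgroup-driver=systemd   # flag taken from the suggestion above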
version_upgrade_test.go:232: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-122000
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-122000: (1.711458519s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-122000 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-122000 status --format={{.Host}}: exit status 7 (101.852761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (4m38.406590328s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-122000 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (787.86287ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-122000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-122000
	    minikube start -p kubernetes-upgrade-122000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1220002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-122000 --kubernetes-version=v1.26.1
	    

                                                
                                                
** /stderr **
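Exit status 106 here corresponds to the K8S_DOWNGRADE_UNSUPPORTED guard shown in the stderr: minikube refuses to move an existing v1.26.1 profile back to v1.16.0 in place, and instead lists the three options above. Before choosing one, the running server version can be confirmed with the same command the test itself uses earlier in this step:

    kubectl --context kubernetes-upgrade-122000 version --output=json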
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (19.465320346s)
version_upgrade_test.go:287: *** TestKubernetesUpgrade FAILED at 2023-02-24 15:33:59.882492 -0800 PST m=+3192.612290342
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-122000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-122000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5b346ce4f2d99c01106786bd4e66e2eda43b89f772ad9e8d86f5d46dfb3469cf",
	        "Created": "2023-02-24T23:24:57.14074926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 582183,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T23:29:02.653646677Z",
	            "FinishedAt": "2023-02-24T23:28:59.912048833Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/5b346ce4f2d99c01106786bd4e66e2eda43b89f772ad9e8d86f5d46dfb3469cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5b346ce4f2d99c01106786bd4e66e2eda43b89f772ad9e8d86f5d46dfb3469cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/5b346ce4f2d99c01106786bd4e66e2eda43b89f772ad9e8d86f5d46dfb3469cf/hosts",
	        "LogPath": "/var/lib/docker/containers/5b346ce4f2d99c01106786bd4e66e2eda43b89f772ad9e8d86f5d46dfb3469cf/5b346ce4f2d99c01106786bd4e66e2eda43b89f772ad9e8d86f5d46dfb3469cf-json.log",
	        "Name": "/kubernetes-upgrade-122000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-122000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-122000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/237d626aa44764ab6da21eaaf8129ae310ea4506c962c9bbb7d86e0fb58ab2c5-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/237d626aa44764ab6da21eaaf8129ae310ea4506c962c9bbb7d86e0fb58ab2c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/237d626aa44764ab6da21eaaf8129ae310ea4506c962c9bbb7d86e0fb58ab2c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/237d626aa44764ab6da21eaaf8129ae310ea4506c962c9bbb7d86e0fb58ab2c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-122000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-122000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-122000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-122000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-122000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d41fb541a686349a1cdf28f3010c390a614941b583cf68caad1c99a7a6c88960",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59773"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59774"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59775"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59771"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59772"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d41fb541a686",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-122000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5b346ce4f2d9",
	                        "kubernetes-upgrade-122000"
	                    ],
	                    "NetworkID": "f7bda63bd1ef54002694cab43d149f0dd0cfabce7b772ce4c64c36ac96ea9dea",
	                    "EndpointID": "49fd7fe7ab3abc665671d4a70d15377d90e2d379e15d3e64e1ea374347231b1c",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
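The docker inspect dump above is the full JSON document; individual fields can be pulled with a Go-template format string instead, which is how the harness itself reads the container state and the published API-server port elsewhere in this log. A sketch reusing those same template expressions:

    docker inspect -f '{{.State.Status}}' kubernetes-upgrade-122000
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-122000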
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-122000 -n kubernetes-upgrade-122000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-122000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-122000 logs -n 25: (2.450702518s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-416000 sudo                               | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo cat                           | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo cat                           | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo                               | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo                               | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo cat                           | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo docker                        | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo                               | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo                               | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo cat                           | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo cat                           | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo                               | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo                               | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo                               | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo cat                           | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo cat                           | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo                               | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo                               | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo                               | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo find                          | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-416000 sudo crio                          | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p flannel-416000                                    | flannel-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	| start   | -p kindnet-416000                                    | kindnet-416000            | jenkins | v1.29.0 | 24 Feb 23 15:33 PST |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker                        |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-122000                         | kubernetes-upgrade-122000 | jenkins | v1.29.0 | 24 Feb 23 15:33 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-122000                         | kubernetes-upgrade-122000 | jenkins | v1.29.0 | 24 Feb 23 15:33 PST | 24 Feb 23 15:33 PST |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 15:33:40
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 15:33:40.513351   41177 out.go:296] Setting OutFile to fd 1 ...
	I0224 15:33:40.513590   41177 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:33:40.513595   41177 out.go:309] Setting ErrFile to fd 2...
	I0224 15:33:40.513599   41177 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:33:40.513707   41177 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 15:33:40.515498   41177 out.go:303] Setting JSON to false
	I0224 15:33:40.540829   41177 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9194,"bootTime":1677272426,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 15:33:40.540947   41177 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 15:33:40.561970   41177 out.go:177] * [kubernetes-upgrade-122000] minikube v1.29.0 on Darwin 13.2.1
	I0224 15:33:40.620031   41177 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 15:33:40.583076   41177 notify.go:220] Checking for updates...
	I0224 15:33:40.664060   41177 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:33:40.685116   41177 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 15:33:40.743149   41177 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 15:33:40.801801   41177 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	I0224 15:33:40.823150   41177 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 15:33:40.492229   41047 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:33:40.570246   41047 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 15:33:40.636235   41047 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:33:40.705369   41047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:33:40.772380   41047 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 15:33:40.790042   41047 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 15:33:40.790122   41047 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 15:33:40.794794   41047 start.go:553] Will wait 60s for crictl version
	I0224 15:33:40.794845   41047 ssh_runner.go:195] Run: which crictl
	I0224 15:33:40.799011   41047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 15:33:40.904839   41047 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 15:33:40.904920   41047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:33:40.935782   41047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:33:40.844436   41177 config.go:182] Loaded profile config "kubernetes-upgrade-122000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:33:40.844792   41177 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 15:33:40.914785   41177 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 15:33:40.914914   41177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:33:41.097300   41177 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:false NGoroutines:61 SystemTime:2023-02-24 23:33:40.990441317 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:33:41.119225   41177 out.go:177] * Using the docker driver based on existing profile
	I0224 15:33:41.160755   41177 start.go:296] selected driver: docker
	I0224 15:33:41.160775   41177 start.go:857] validating driver "docker" against &{Name:kubernetes-upgrade-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-122000 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:33:41.160876   41177 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 15:33:41.164225   41177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:33:41.320734   41177 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:false NGoroutines:61 SystemTime:2023-02-24 23:33:41.217940441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:33:41.320873   41177 cni.go:84] Creating CNI manager for ""
	I0224 15:33:41.320889   41177 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 15:33:41.320906   41177 start_flags.go:319] config:
	{Name:kubernetes-upgrade-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP:}
	I0224 15:33:41.342771   41177 out.go:177] * Starting control plane node kubernetes-upgrade-122000 in cluster kubernetes-upgrade-122000
	I0224 15:33:41.363496   41177 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 15:33:41.384280   41177 out.go:177] * Pulling base image ...
	I0224 15:33:41.426627   41177 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 15:33:41.426631   41177 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:33:41.426688   41177 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 15:33:41.426702   41177 cache.go:57] Caching tarball of preloaded images
	I0224 15:33:41.426836   41177 preload.go:174] Found /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 15:33:41.426849   41177 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 15:33:41.427521   41177 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/config.json ...
	I0224 15:33:41.494986   41177 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 15:33:41.495009   41177 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 15:33:41.495033   41177 cache.go:193] Successfully downloaded all kic artifacts
	I0224 15:33:41.495082   41177 start.go:364] acquiring machines lock for kubernetes-upgrade-122000: {Name:mk652cd91b310ddade995e61f87af59023b8312d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 15:33:41.495194   41177 start.go:368] acquired machines lock for "kubernetes-upgrade-122000" in 89.605µs
	I0224 15:33:41.495228   41177 start.go:96] Skipping create...Using existing machine configuration
	I0224 15:33:41.495237   41177 fix.go:55] fixHost starting: 
	I0224 15:33:41.495579   41177 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-122000 --format={{.State.Status}}
	I0224 15:33:41.561073   41177 fix.go:103] recreateIfNeeded on kubernetes-upgrade-122000: state=Running err=<nil>
	W0224 15:33:41.561113   41177 fix.go:129] unexpected machine state, will restart: <nil>
	I0224 15:33:41.582454   41177 out.go:177] * Updating the running docker "kubernetes-upgrade-122000" container ...
	I0224 15:33:40.987433   41047 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 15:33:40.987521   41047 cli_runner.go:164] Run: docker exec -t kindnet-416000 dig +short host.docker.internal
	I0224 15:33:41.163608   41047 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 15:33:41.163717   41047 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 15:33:41.168535   41047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:33:41.179760   41047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-416000
	I0224 15:33:41.245221   41047 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:33:41.245318   41047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:33:41.269022   41047 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 15:33:41.269038   41047 docker.go:560] Images already preloaded, skipping extraction
	I0224 15:33:41.269142   41047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:33:41.291950   41047 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 15:33:41.291962   41047 cache_images.go:84] Images are preloaded, skipping loading
	I0224 15:33:41.292039   41047 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 15:33:41.321670   41047 cni.go:84] Creating CNI manager for "kindnet"
	I0224 15:33:41.321694   41047 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 15:33:41.321717   41047 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-416000 NodeName:kindnet-416000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 15:33:41.321828   41047 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kindnet-416000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 15:33:41.321914   41047 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kindnet-416000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:kindnet-416000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0224 15:33:41.321973   41047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 15:33:41.330901   41047 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 15:33:41.331009   41047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 15:33:41.339188   41047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (446 bytes)
	I0224 15:33:41.353339   41047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 15:33:41.368199   41047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes)
	I0224 15:33:41.382047   41047 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0224 15:33:41.386373   41047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
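The bash one-liner above makes /etc/hosts resolve control-plane.minikube.internal to the node IP by filtering out any stale entry and appending a fresh one. A rough local Go equivalent of that filter-then-append step is sketched below; it is an illustration only (the real step runs remotely over SSH with sudo), and ensureHostsEntry is a hypothetical name.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry mirrors the shell pipeline in the log: drop any
    // line already ending in the host name (like `grep -v`), append
    // "ip<TAB>name", and write the result back.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // old entry, filtered out
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// IP and name taken from the log line above.
    	if err := ensureHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }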
	I0224 15:33:41.396608   41047 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000 for IP: 192.168.76.2
	I0224 15:33:41.396631   41047 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:33:41.396819   41047 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 15:33:41.396887   41047 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 15:33:41.396932   41047 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.key
	I0224 15:33:41.396947   41047 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt with IP's: []
	I0224 15:33:41.456271   41047 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt ...
	I0224 15:33:41.456288   41047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: {Name:mkd047b94809d1a1639f65c48658da5975b2070f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:33:41.456619   41047 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.key ...
	I0224 15:33:41.456628   41047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.key: {Name:mk45c83d1c37507038da93162d859e875776a79e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:33:41.456852   41047 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/apiserver.key.31bdca25
	I0224 15:33:41.456872   41047 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0224 15:33:41.542756   41047 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/apiserver.crt.31bdca25 ...
	I0224 15:33:41.542771   41047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/apiserver.crt.31bdca25: {Name:mkfc2861e5aff51d8d9fab2cd721611aea25a36a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:33:41.543098   41047 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/apiserver.key.31bdca25 ...
	I0224 15:33:41.543107   41047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/apiserver.key.31bdca25: {Name:mk12ca5140b281070e2f1118d28adbaf1aafd54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:33:41.543311   41047 certs.go:333] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/apiserver.crt
	I0224 15:33:41.543483   41047 certs.go:337] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/apiserver.key
	I0224 15:33:41.543644   41047 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/proxy-client.key
	I0224 15:33:41.543659   41047 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/proxy-client.crt with IP's: []
	I0224 15:33:41.714786   41047 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/proxy-client.crt ...
	I0224 15:33:41.714802   41047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/proxy-client.crt: {Name:mk01ffccd24af492b24af2680720880e1d1004b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:33:41.715132   41047 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/proxy-client.key ...
	I0224 15:33:41.715146   41047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/proxy-client.key: {Name:mk33e66fc378a426c68d944c6982b7cab1072f9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
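The client, apiserver, and proxy-client certificates above are generated in-process by minikube. A condensed crypto/x509 sketch of what an apiserver-style certificate with the IP SANs from the log amounts to is shown below; it is self-signed for brevity (minikube signs with its CA instead), and the key size and validity period are placeholders, not values from the log.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Placeholder key; the real code signs with the minikubeCA key pair.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour), // placeholder validity
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs copied from the apiserver cert log line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("192.168.76.2"),
    			net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"),
    		},
    	}
    	// Self-signed here for brevity.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }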
	I0224 15:33:41.715605   41047 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 15:33:41.715664   41047 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 15:33:41.715677   41047 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 15:33:41.715718   41047 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 15:33:41.715762   41047 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 15:33:41.715804   41047 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 15:33:41.715893   41047 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:33:41.716541   41047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 15:33:41.736012   41047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 15:33:41.754512   41047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 15:33:41.773364   41047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0224 15:33:41.791486   41047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 15:33:41.810320   41047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 15:33:41.832795   41047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 15:33:41.853821   41047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 15:33:41.874097   41047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 15:33:41.894304   41047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 15:33:41.912296   41047 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 15:33:41.931087   41047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 15:33:41.946751   41047 ssh_runner.go:195] Run: openssl version
	I0224 15:33:41.954104   41047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 15:33:41.963748   41047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 15:33:41.967991   41047 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:33:41.968044   41047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 15:33:41.973925   41047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
	I0224 15:33:41.982579   41047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 15:33:41.992755   41047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 15:33:41.996966   41047 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:33:41.997042   41047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 15:33:42.002842   41047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 15:33:42.012018   41047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 15:33:42.021100   41047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:33:42.025914   41047 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:33:42.025986   41047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:33:42.032515   41047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
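The repeated openssl/ln steps above install each PEM into the system trust directory under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). A Go sketch of those two steps is below; it shells out to openssl exactly as the log does and assumes openssl is on PATH and /etc/ssl/certs is writable. installCACert is a hypothetical helper name.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert mirrors the log's steps: ask openssl for the subject
    // hash of the PEM, then symlink /etc/ssl/certs/<hash>.0 to it.
    func installCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	if _, err := os.Lstat(link); err == nil {
    		return nil // already linked, matching the `test -L || ln -fs` guard
    	}
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }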
	I0224 15:33:42.042846   41047 kubeadm.go:401] StartCluster: {Name:kindnet-416000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kindnet-416000 Namespace:default APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMe
trics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:33:42.042946   41047 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 15:33:42.065288   41047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 15:33:42.074390   41047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 15:33:42.083671   41047 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 15:33:42.083742   41047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 15:33:42.041351   41047 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 15:33:42.041380   41047 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
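The long --ignore-preflight-errors value in the command above is simply a comma-joined list of preflight checks that are skipped for the docker driver. A trivial sketch of assembling that flag follows; the list here is an abridged subset of the one in the log, purely for illustration.

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Abridged subset of the checks skipped in the log above.
    	ignored := []string{"Swap", "NumCPU", "Mem", "SystemVerification"}
    	args := []string{
    		"init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--ignore-preflight-errors=" + strings.Join(ignored, ","),
    	}
    	fmt.Println("kubeadm " + strings.Join(args, " "))
    }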
	I0224 15:33:42.104356   41047 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0224 15:33:42.104415   41047 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 15:33:42.228088   41047 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 15:33:42.228190   41047 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 15:33:42.228277   41047 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0224 15:33:42.380861   41047 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 15:33:42.426295   41047 out.go:204]   - Generating certificates and keys ...
	I0224 15:33:42.426382   41047 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 15:33:42.426443   41047 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 15:33:42.595066   41047 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 15:33:42.717142   41047 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0224 15:33:42.774824   41047 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0224 15:33:42.863468   41047 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0224 15:33:42.964637   41047 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0224 15:33:42.964776   41047 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kindnet-416000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0224 15:33:43.018322   41047 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0224 15:33:43.018458   41047 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kindnet-416000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0224 15:33:43.092809   41047 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 15:33:43.236101   41047 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 15:33:43.354647   41047 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0224 15:33:43.354774   41047 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 15:33:43.408548   41047 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 15:33:43.531522   41047 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 15:33:43.652538   41047 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 15:33:43.802818   41047 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 15:33:43.814735   41047 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:33:43.815377   41047 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:33:43.815421   41047 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0224 15:33:43.891580   41047 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 15:33:43.915889   41047 out.go:204]   - Booting up control plane ...
	I0224 15:33:43.916053   41047 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 15:33:43.916182   41047 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 15:33:43.916249   41047 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 15:33:43.916372   41047 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 15:33:43.916565   41047 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 15:33:41.603473   41177 machine.go:88] provisioning docker machine ...
	I0224 15:33:41.603510   41177 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-122000"
	I0224 15:33:41.603617   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:41.666827   41177 main.go:141] libmachine: Using SSH client type: native
	I0224 15:33:41.667253   41177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59773 <nil> <nil>}
	I0224 15:33:41.667276   41177 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-122000 && echo "kubernetes-upgrade-122000" | sudo tee /etc/hostname
	I0224 15:33:41.811146   41177 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-122000
	
	I0224 15:33:41.811232   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:41.876768   41177 main.go:141] libmachine: Using SSH client type: native
	I0224 15:33:41.877132   41177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59773 <nil> <nil>}
	I0224 15:33:41.877147   41177 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-122000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-122000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-122000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 15:33:42.009879   41177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 15:33:42.009904   41177 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
	I0224 15:33:42.009939   41177 ubuntu.go:177] setting up certificates
	I0224 15:33:42.009950   41177 provision.go:83] configureAuth start
	I0224 15:33:42.010044   41177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-122000
	I0224 15:33:42.074740   41177 provision.go:138] copyHostCerts
	I0224 15:33:42.074845   41177 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
	I0224 15:33:42.074855   41177 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:33:42.074954   41177 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
	I0224 15:33:42.075173   41177 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
	I0224 15:33:42.075179   41177 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:33:42.075243   41177 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
	I0224 15:33:42.075391   41177 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
	I0224 15:33:42.075396   41177 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:33:42.075459   41177 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
	I0224 15:33:42.075590   41177 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-122000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-122000]
	I0224 15:33:42.305035   41177 provision.go:172] copyRemoteCerts
	I0224 15:33:42.305109   41177 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 15:33:42.305167   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:42.370637   41177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59773 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa Username:docker}
	I0224 15:33:42.465774   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 15:33:42.485592   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0224 15:33:42.505112   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 15:33:42.526371   41177 provision.go:86] duration metric: configureAuth took 567.83644ms
	I0224 15:33:42.526392   41177 ubuntu.go:193] setting minikube options for container-runtime
	I0224 15:33:42.526570   41177 config.go:182] Loaded profile config "kubernetes-upgrade-122000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:33:42.526650   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:42.595279   41177 main.go:141] libmachine: Using SSH client type: native
	I0224 15:33:42.595718   41177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59773 <nil> <nil>}
	I0224 15:33:42.595733   41177 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 15:33:42.733001   41177 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 15:33:42.733019   41177 ubuntu.go:71] root file system type: overlay
	I0224 15:33:42.733111   41177 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 15:33:42.733201   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:42.800620   41177 main.go:141] libmachine: Using SSH client type: native
	I0224 15:33:42.801067   41177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59773 <nil> <nil>}
	I0224 15:33:42.801124   41177 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 15:33:42.945686   41177 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 15:33:42.945773   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:43.009596   41177 main.go:141] libmachine: Using SSH client type: native
	I0224 15:33:43.010000   41177 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 59773 <nil> <nil>}
	I0224 15:33:43.010015   41177 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
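The command above only replaces docker.service and restarts the daemon when the freshly rendered unit differs from what is already installed. A local Go sketch of that compare-then-swap guard is shown below; the real flow runs these steps remotely over SSH with sudo, and swapIfChanged is a hypothetical helper name.

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // swapIfChanged mirrors the `diff || { mv && systemctl ... }` guard:
    // if the new unit is byte-identical, do nothing; otherwise install it
    // and reload/enable/restart docker.
    func swapIfChanged(current, proposed string) error {
    	oldData, _ := os.ReadFile(current) // a missing file just means "changed"
    	newData, err := os.ReadFile(proposed)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(oldData, newData) {
    		return nil
    	}
    	if err := os.Rename(proposed, current); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"},
    		{"enable", "docker"},
    		{"restart", "docker"},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := swapIfChanged("/lib/systemd/system/docker.service", "/lib/systemd/system/docker.service.new"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }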
	I0224 15:33:43.149516   41177 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 15:33:43.149534   41177 machine.go:91] provisioned docker machine in 1.597451432s
	I0224 15:33:43.149547   41177 start.go:300] post-start starting for "kubernetes-upgrade-122000" (driver="docker")
	I0224 15:33:43.149555   41177 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 15:33:43.149648   41177 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 15:33:43.149711   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:43.212504   41177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59773 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa Username:docker}
	I0224 15:33:43.310303   41177 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 15:33:43.314587   41177 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 15:33:43.314610   41177 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 15:33:43.314617   41177 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 15:33:43.314622   41177 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 15:33:43.314631   41177 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
	I0224 15:33:43.314744   41177 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
	I0224 15:33:43.314907   41177 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
	I0224 15:33:43.315095   41177 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 15:33:43.323515   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:33:43.342982   41177 start.go:303] post-start completed in 193.420073ms
	I0224 15:33:43.343073   41177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 15:33:43.343137   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:43.407377   41177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59773 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa Username:docker}
	I0224 15:33:43.500773   41177 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 15:33:43.506512   41177 fix.go:57] fixHost completed within 2.062663439s
	I0224 15:33:43.506529   41177 start.go:83] releasing machines lock for "kubernetes-upgrade-122000", held for 2.062720927s
	I0224 15:33:43.506622   41177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-122000
	I0224 15:33:43.570965   41177 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 15:33:43.570965   41177 ssh_runner.go:195] Run: cat /version.json
	I0224 15:33:43.571079   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:43.571079   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:43.643697   41177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59773 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa Username:docker}
	I0224 15:33:43.644379   41177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59773 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa Username:docker}
	I0224 15:33:43.792809   41177 ssh_runner.go:195] Run: systemctl --version
	I0224 15:33:43.798546   41177 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 15:33:43.804213   41177 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 15:33:43.804281   41177 ssh_runner.go:195] Run: which cri-dockerd
	I0224 15:33:43.808875   41177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 15:33:43.817541   41177 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 15:33:43.831644   41177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0224 15:33:43.842929   41177 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0224 15:33:43.851918   41177 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0224 15:33:43.851936   41177 start.go:485] detecting cgroup driver to use...
	I0224 15:33:43.851949   41177 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:33:43.852039   41177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:33:43.866467   41177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 15:33:43.875904   41177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 15:33:43.885218   41177 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 15:33:43.885287   41177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 15:33:43.895860   41177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:33:43.918393   41177 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 15:33:43.928206   41177 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:33:43.938195   41177 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 15:33:43.947785   41177 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 15:33:43.957544   41177 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 15:33:43.965572   41177 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 15:33:43.974498   41177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:33:44.064489   41177 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 15:33:45.103687   41177 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (1.039161139s)
	I0224 15:33:45.103704   41177 start.go:485] detecting cgroup driver to use...
	I0224 15:33:45.103717   41177 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:33:45.103783   41177 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 15:33:45.114631   41177 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 15:33:45.114704   41177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 15:33:45.125638   41177 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:33:45.140443   41177 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 15:33:45.247198   41177 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 15:33:45.353458   41177 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 15:33:45.353478   41177 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
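The 144-byte /etc/docker/daemon.json pushed above is what switches dockerd to the cgroupfs driver detected earlier. The log shows only the file's size, not its contents, so the sketch below is an assumption: it writes an illustrative daemon.json using Docker's documented "exec-opts" setting, which may differ from what minikube actually generates.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    func main() {
    	// Illustrative content only; the real file is not shown in the log.
    	// "exec-opts" is Docker's documented way to select a cgroup driver.
    	cfg := map[string]interface{}{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }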
	I0224 15:33:45.368211   41177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:33:45.487280   41177 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 15:33:45.830531   41177 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:33:45.910153   41177 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 15:33:45.984061   41177 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:33:46.088650   41177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:33:46.339338   41177 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 15:33:46.368935   41177 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 15:33:46.369046   41177 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 15:33:46.373949   41177 start.go:553] Will wait 60s for crictl version
	I0224 15:33:46.374033   41177 ssh_runner.go:195] Run: which crictl
	I0224 15:33:46.378503   41177 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 15:33:46.551230   41177 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 15:33:46.551311   41177 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:33:46.583181   41177 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:33:46.669366   41177 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 15:33:46.669467   41177 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-122000 dig +short host.docker.internal
	I0224 15:33:46.789367   41177 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 15:33:46.789519   41177 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 15:33:46.794469   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:46.859482   41177 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:33:46.859570   41177 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:33:46.881889   41177 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0224 15:33:46.881910   41177 docker.go:560] Images already preloaded, skipping extraction
	I0224 15:33:46.882017   41177 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:33:46.907337   41177 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0224 15:33:46.907355   41177 cache_images.go:84] Images are preloaded, skipping loading
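The preload check above works by listing what the container runtime already has and comparing it against the images required for v1.26.1; since everything is present, extraction is skipped. A standalone Go sketch of that check is below; it uses the same `docker images` format string as the log, and the required list is an abridged subset of the one printed above.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	// Abridged; taken from the image list printed above.
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.26.1",
    		"registry.k8s.io/etcd:3.5.6-0",
    		"registry.k8s.io/coredns/coredns:v1.9.3",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	for _, img := range required {
    		if !have[img] {
    			fmt.Println("missing:", img)
    		}
    	}
    }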
	I0224 15:33:46.907444   41177 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 15:33:46.938434   41177 cni.go:84] Creating CNI manager for ""
	I0224 15:33:46.938452   41177 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 15:33:46.938474   41177 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 15:33:46.938493   41177 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-122000 NodeName:kubernetes-upgrade-122000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 15:33:46.938621   41177 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-122000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 15:33:46.938719   41177 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-122000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 15:33:46.938789   41177 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 15:33:46.949365   41177 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 15:33:46.949427   41177 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 15:33:46.959318   41177 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (457 bytes)
	I0224 15:33:46.975200   41177 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 15:33:46.993611   41177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0224 15:33:47.054699   41177 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0224 15:33:47.060316   41177 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000 for IP: 192.168.67.2
	I0224 15:33:47.060347   41177 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:33:47.060640   41177 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 15:33:47.060703   41177 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 15:33:47.060809   41177 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.key
	I0224 15:33:47.060887   41177 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.key.c7fa3a9e
	I0224 15:33:47.060969   41177 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/proxy-client.key
	I0224 15:33:47.061217   41177 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 15:33:47.061269   41177 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 15:33:47.061284   41177 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 15:33:47.061328   41177 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 15:33:47.061367   41177 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 15:33:47.061409   41177 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 15:33:47.061493   41177 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:33:47.062133   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 15:33:47.141429   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 15:33:47.165745   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 15:33:47.244211   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 15:33:47.341237   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 15:33:47.368669   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 15:33:47.440316   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 15:33:47.458824   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 15:33:47.477992   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 15:33:47.497844   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 15:33:47.517423   41177 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 15:33:47.544002   41177 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 15:33:47.558059   41177 ssh_runner.go:195] Run: openssl version
	I0224 15:33:47.564370   41177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 15:33:47.574090   41177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 15:33:47.578790   41177 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:33:47.578843   41177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 15:33:47.585141   41177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 15:33:47.594656   41177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 15:33:47.605500   41177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:33:47.610783   41177 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:33:47.610842   41177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:33:47.617401   41177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 15:33:47.625886   41177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 15:33:47.634729   41177 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 15:33:47.641584   41177 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:33:47.641647   41177 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 15:33:47.648094   41177 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
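	(The lines above show the CA-install step: each PEM under /usr/share/ca-certificates gets its OpenSSL subject hash computed and a <hash>.0 symlink created in /etc/ssl/certs. Below is a minimal Go sketch of that step, not minikube's actual implementation; it assumes openssl is on PATH and runs locally, whereas the test drives the same commands over SSH via ssh_runner. The cert path in main is just the example file from the log.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert mirrors the symlink step from the log: compute the OpenSSL
    // subject hash of a PEM file and link it as <hash>.0 under /etc/ssl/certs.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Equivalent of "ln -fs": drop any stale link, then point it at the PEM.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/268712.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }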
	I0224 15:33:47.656693   41177 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-122000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:33:47.656814   41177 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 15:33:47.680622   41177 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 15:33:47.689992   41177 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0224 15:33:47.690011   41177 kubeadm.go:633] restartCluster start
	I0224 15:33:47.690067   41177 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0224 15:33:47.697721   41177 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:33:47.697798   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:47.764441   41177 kubeconfig.go:92] found "kubernetes-upgrade-122000" server: "https://127.0.0.1:59772"
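	(The restart path above resolves the apiserver endpoint by asking Docker which host port is mapped to the container's 8443/tcp and comparing it with the kubeconfig entry. Here is a rough Go sketch of that lookup, under the assumption that the docker CLI is available; the template string and container name are taken directly from the log lines above.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPortFor returns the host port Docker mapped to the container's 8443/tcp,
    // using the same Go template that appears in the log.
    func hostPortFor(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostPortFor("kubernetes-upgrade-122000")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Printf("apiserver reachable at https://127.0.0.1:%s\n", port)
    }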
	I0224 15:33:47.765051   41177 kapi.go:59] client config for kubernetes-upgrade-122000: &rest.Config{Host:"https://127.0.0.1:59772", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:33:47.765840   41177 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0224 15:33:47.774398   41177 api_server.go:165] Checking apiserver status ...
	I0224 15:33:47.774464   41177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:33:47.784454   41177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/11769/cgroup
	W0224 15:33:47.794110   41177 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/11769/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:33:47.794189   41177 ssh_runner.go:195] Run: ls
	I0224 15:33:47.799228   41177 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59772/healthz ...
	I0224 15:33:49.769491   41177 api_server.go:278] https://127.0.0.1:59772/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 15:33:49.769552   41177 retry.go:31] will retry after 247.226066ms: https://127.0.0.1:59772/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 15:33:50.018364   41177 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59772/healthz ...
	I0224 15:33:50.025306   41177 api_server.go:278] https://127.0.0.1:59772/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 15:33:50.025323   41177 retry.go:31] will retry after 287.089628ms: https://127.0.0.1:59772/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 15:33:50.313213   41177 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59772/healthz ...
	I0224 15:33:50.318724   41177 api_server.go:278] https://127.0.0.1:59772/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 15:33:50.318743   41177 retry.go:31] will retry after 333.226178ms: https://127.0.0.1:59772/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 15:33:52.901570   41047 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.001715 seconds
	I0224 15:33:52.901679   41047 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 15:33:52.911187   41047 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 15:33:53.426109   41047 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0224 15:33:53.426250   41047 kubeadm.go:322] [mark-control-plane] Marking the node kindnet-416000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 15:33:53.936291   41047 kubeadm.go:322] [bootstrap-token] Using token: m389ar.0bqaz9wfhpvkib59
	I0224 15:33:53.960831   41047 out.go:204]   - Configuring RBAC rules ...
	I0224 15:33:53.960913   41047 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 15:33:54.001656   41047 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 15:33:54.008383   41047 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 15:33:54.011308   41047 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 15:33:54.014371   41047 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 15:33:54.016555   41047 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 15:33:54.025076   41047 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 15:33:54.168662   41047 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0224 15:33:54.437632   41047 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0224 15:33:54.439001   41047 kubeadm.go:322] 
	I0224 15:33:54.439140   41047 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0224 15:33:54.439156   41047 kubeadm.go:322] 
	I0224 15:33:54.439315   41047 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0224 15:33:54.439324   41047 kubeadm.go:322] 
	I0224 15:33:54.439345   41047 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0224 15:33:54.440961   41047 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 15:33:54.441091   41047 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 15:33:54.441109   41047 kubeadm.go:322] 
	I0224 15:33:54.441182   41047 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0224 15:33:54.441202   41047 kubeadm.go:322] 
	I0224 15:33:54.441335   41047 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 15:33:54.441348   41047 kubeadm.go:322] 
	I0224 15:33:54.441426   41047 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0224 15:33:54.441535   41047 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 15:33:54.441621   41047 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 15:33:54.441629   41047 kubeadm.go:322] 
	I0224 15:33:54.441755   41047 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0224 15:33:54.441870   41047 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0224 15:33:54.441882   41047 kubeadm.go:322] 
	I0224 15:33:54.442010   41047 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token m389ar.0bqaz9wfhpvkib59 \
	I0224 15:33:54.442189   41047 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e \
	I0224 15:33:54.442253   41047 kubeadm.go:322] 	--control-plane 
	I0224 15:33:54.442271   41047 kubeadm.go:322] 
	I0224 15:33:54.442382   41047 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0224 15:33:54.442396   41047 kubeadm.go:322] 
	I0224 15:33:54.442502   41047 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token m389ar.0bqaz9wfhpvkib59 \
	I0224 15:33:54.442633   41047 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e 
	I0224 15:33:54.448041   41047 kubeadm.go:322] W0224 23:33:42.145966    1294 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 15:33:54.448201   41047 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0224 15:33:54.448373   41047 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:33:54.448400   41047 cni.go:84] Creating CNI manager for "kindnet"
	I0224 15:33:54.494601   41047 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0224 15:33:54.531125   41047 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0224 15:33:54.536940   41047 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0224 15:33:54.536954   41047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0224 15:33:54.554156   41047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0224 15:33:50.652531   41177 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59772/healthz ...
	I0224 15:33:50.658886   41177 api_server.go:278] https://127.0.0.1:59772/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 15:33:50.658902   41177 retry.go:31] will retry after 608.125102ms: https://127.0.0.1:59772/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 15:33:51.267106   41177 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59772/healthz ...
	I0224 15:33:51.272476   41177 api_server.go:278] https://127.0.0.1:59772/healthz returned 200:
	ok
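	(The 403 and 500 responses above are the expected progression while the restarted apiserver finishes its post-start hooks; the test keeps polling /healthz until it answers 200. Below is a simplified Go sketch of such a polling loop, not minikube's retry package: it uses a fixed sleep instead of the growing delays shown in the log and skips TLS verification because it does not load the client certificates from the kapi config above.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
    // or the deadline expires. 403 and 500 responses count as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
            }
            time.Sleep(300 * time.Millisecond) // the log shows retry delays in this range
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://127.0.0.1:59772/healthz", 3*time.Minute); err != nil {
            fmt.Println(err)
        }
    }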
	I0224 15:33:51.284675   41177 system_pods.go:86] 5 kube-system pods found
	I0224 15:33:51.284691   41177 system_pods.go:89] "etcd-kubernetes-upgrade-122000" [bf200df1-1acf-4b42-91bd-9064be263e40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 15:33:51.284696   41177 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-122000" [a7fde842-cbb4-4057-99bd-0c83ba33075c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0224 15:33:51.284706   41177 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-122000" [55a5d916-1887-4394-8fac-e9f14adb929e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0224 15:33:51.284713   41177 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-122000" [7ba4fe30-897c-4d67-9333-3eceea9951da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0224 15:33:51.284718   41177 system_pods.go:89] "storage-provisioner" [b9e9f490-9719-4f75-a1fd-d8c28f4cd08f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0224 15:33:51.284724   41177 kubeadm.go:617] needs reconfigure: missing components: kube-dns, kube-proxy
	I0224 15:33:51.284731   41177 kubeadm.go:1120] stopping kube-system containers ...
	I0224 15:33:51.284798   41177 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 15:33:51.306685   41177 docker.go:456] Stopping containers: [94ddd0f32d80 33e53e2a1062 0f0e26bb01ff d3137f4ecec7 8a99d0d6ebf1 f16be056fb3b b86b7e885e97 fea91f3d6985 dc39e5b54046 39ae3a1af45f 67d573e544e0 9e242aef2009 0d65af4cd092 4944a5734e8c 5c9453a8e198 f5f62df3a47f]
	I0224 15:33:51.306761   41177 ssh_runner.go:195] Run: docker stop 94ddd0f32d80 33e53e2a1062 0f0e26bb01ff d3137f4ecec7 8a99d0d6ebf1 f16be056fb3b b86b7e885e97 fea91f3d6985 dc39e5b54046 39ae3a1af45f 67d573e544e0 9e242aef2009 0d65af4cd092 4944a5734e8c 5c9453a8e198 f5f62df3a47f
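	(Before reconfiguring, restartCluster stops every kube-system container it finds and then stops the kubelet, as the two docker commands above show. A compact Go sketch of that stop step follows, assuming the docker CLI is available; the name filter is copied from the log.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // stopKubeSystemContainers lists all containers matching the kube-system name
    // pattern from the log and stops them in a single docker invocation.
    func stopKubeSystemContainers() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil // nothing to stop
        }
        fmt.Println("Stopping containers:", ids)
        return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }

    func main() {
        if err := stopKubeSystemContainers(); err != nil {
            fmt.Println("stop failed:", err)
        }
    }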
	I0224 15:33:52.268237   41177 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0224 15:33:52.354170   41177 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 15:33:52.370454   41177 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 24 23:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb 24 23:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Feb 24 23:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 24 23:33 /etc/kubernetes/scheduler.conf
	
	I0224 15:33:52.370522   41177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 15:33:52.382120   41177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 15:33:52.393881   41177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 15:33:52.406677   41177 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:33:52.406757   41177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 15:33:52.444295   41177 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 15:33:52.457790   41177 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:33:52.457885   41177 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
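	(The grep/rm sequence above keeps only those files under /etc/kubernetes that already reference https://control-plane.minikube.internal:8443; the ones that do not, controller-manager.conf and scheduler.conf here, are removed so "kubeadm init phase kubeconfig" regenerates them. A small Go sketch of the same check, assuming local file access rather than ssh_runner.)

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    // pruneStaleConf removes a kubeconfig file that does not reference the expected
    // control-plane endpoint, so that kubeadm recreates it on the next init phase.
    func pruneStaleConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil // file already points at the right endpoint, keep it
        }
        fmt.Printf("%q not found in %s - removing\n", endpoint, path)
        return os.Remove(path)
    }

    func main() {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := pruneStaleConf(f); err != nil {
                fmt.Println(err)
            }
        }
    }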
	I0224 15:33:52.467550   41177 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 15:33:52.477839   41177 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0224 15:33:52.477863   41177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:33:52.534395   41177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:33:53.216180   41177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:33:53.365614   41177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:33:53.443110   41177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:33:53.545840   41177 api_server.go:51] waiting for apiserver process to appear ...
	I0224 15:33:53.545919   41177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:33:54.063145   41177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:33:54.563159   41177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:33:55.063245   41177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:33:55.077223   41177 api_server.go:71] duration metric: took 1.531356397s to wait for apiserver process to appear ...
	I0224 15:33:55.077254   41177 api_server.go:87] waiting for apiserver healthz status ...
	I0224 15:33:55.077269   41177 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59772/healthz ...
	I0224 15:33:56.917263   41177 api_server.go:278] https://127.0.0.1:59772/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 15:33:56.917287   41177 api_server.go:102] status: https://127.0.0.1:59772/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 15:33:57.417707   41177 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59772/healthz ...
	I0224 15:33:57.423128   41177 api_server.go:278] https://127.0.0.1:59772/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 15:33:57.423145   41177 api_server.go:102] status: https://127.0.0.1:59772/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 15:33:57.917453   41177 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59772/healthz ...
	I0224 15:33:57.926687   41177 api_server.go:278] https://127.0.0.1:59772/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 15:33:57.926712   41177 api_server.go:102] status: https://127.0.0.1:59772/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 15:33:58.417398   41177 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59772/healthz ...
	I0224 15:33:58.422770   41177 api_server.go:278] https://127.0.0.1:59772/healthz returned 200:
	ok
	I0224 15:33:58.429971   41177 api_server.go:140] control plane version: v1.26.1
	I0224 15:33:58.429982   41177 api_server.go:130] duration metric: took 3.352683233s to wait for apiserver health ...
	I0224 15:33:58.429988   41177 cni.go:84] Creating CNI manager for ""
	I0224 15:33:58.429996   41177 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 15:33:58.454076   41177 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 15:33:58.474177   41177 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 15:33:58.483166   41177 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0224 15:33:58.496441   41177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 15:33:58.502854   41177 system_pods.go:59] 5 kube-system pods found
	I0224 15:33:58.502872   41177 system_pods.go:61] "etcd-kubernetes-upgrade-122000" [bf200df1-1acf-4b42-91bd-9064be263e40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 15:33:58.502878   41177 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-122000" [a7fde842-cbb4-4057-99bd-0c83ba33075c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0224 15:33:58.502884   41177 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-122000" [55a5d916-1887-4394-8fac-e9f14adb929e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0224 15:33:58.502890   41177 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-122000" [7ba4fe30-897c-4d67-9333-3eceea9951da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0224 15:33:58.502894   41177 system_pods.go:61] "storage-provisioner" [b9e9f490-9719-4f75-a1fd-d8c28f4cd08f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0224 15:33:58.502898   41177 system_pods.go:74] duration metric: took 6.446795ms to wait for pod list to return data ...
	I0224 15:33:58.502907   41177 node_conditions.go:102] verifying NodePressure condition ...
	I0224 15:33:58.506014   41177 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0224 15:33:58.506028   41177 node_conditions.go:123] node cpu capacity is 6
	I0224 15:33:58.506038   41177 node_conditions.go:105] duration metric: took 3.127123ms to run NodePressure ...
	I0224 15:33:58.506052   41177 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:33:58.647275   41177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 15:33:58.654803   41177 ops.go:34] apiserver oom_adj: -16
	I0224 15:33:58.654812   41177 kubeadm.go:637] restartCluster took 10.964664501s
	I0224 15:33:58.654817   41177 kubeadm.go:403] StartCluster complete in 10.998000728s
	I0224 15:33:58.654830   41177 settings.go:142] acquiring lock: {Name:mk61f6764f7c264302b01ffc8eee0ee0f10d20c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:33:58.654923   41177 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:33:58.655405   41177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/kubeconfig: {Name:mk1182fefc6ba3b7ea2d2356c47127703005a4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:33:58.655651   41177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 15:33:58.655689   41177 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0224 15:33:58.655735   41177 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-122000"
	I0224 15:33:58.655743   41177 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-122000"
	I0224 15:33:58.655751   41177 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-122000"
	W0224 15:33:58.655757   41177 addons.go:236] addon storage-provisioner should already be in state true
	I0224 15:33:58.655769   41177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-122000"
	I0224 15:33:58.655802   41177 host.go:66] Checking if "kubernetes-upgrade-122000" exists ...
	I0224 15:33:58.655843   41177 config.go:182] Loaded profile config "kubernetes-upgrade-122000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:33:58.656065   41177 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-122000 --format={{.State.Status}}
	I0224 15:33:58.656123   41177 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-122000 --format={{.State.Status}}
	I0224 15:33:58.656150   41177 kapi.go:59] client config for kubernetes-upgrade-122000: &rest.Config{Host:"https://127.0.0.1:59772", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:33:58.663086   41177 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-122000" context rescaled to 1 replicas
	I0224 15:33:58.663115   41177 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 15:33:58.684722   41177 out.go:177] * Verifying Kubernetes components...
	I0224 15:33:58.744379   41177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:33:58.752979   41177 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0224 15:33:58.760497   41177 kapi.go:59] client config for kubernetes-upgrade-122000: &rest.Config{Host:"https://127.0.0.1:59772", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubernetes-upgrade-122000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0224 15:33:58.761081   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:58.781346   41177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 15:33:58.789220   41177 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-122000"
	W0224 15:33:58.802222   41177 addons.go:236] addon default-storageclass should already be in state true
	I0224 15:33:58.802269   41177 host.go:66] Checking if "kubernetes-upgrade-122000" exists ...
	I0224 15:33:58.802402   41177 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 15:33:58.802414   41177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 15:33:58.802518   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:58.803214   41177 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-122000 --format={{.State.Status}}
	I0224 15:33:58.846273   41177 api_server.go:51] waiting for apiserver process to appear ...
	I0224 15:33:58.846349   41177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:33:58.858481   41177 api_server.go:71] duration metric: took 195.334435ms to wait for apiserver process to appear ...
	I0224 15:33:58.858504   41177 api_server.go:87] waiting for apiserver healthz status ...
	I0224 15:33:58.858526   41177 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59772/healthz ...
	I0224 15:33:58.866807   41177 api_server.go:278] https://127.0.0.1:59772/healthz returned 200:
	ok
	I0224 15:33:58.868685   41177 api_server.go:140] control plane version: v1.26.1
	I0224 15:33:58.868701   41177 api_server.go:130] duration metric: took 10.190422ms to wait for apiserver health ...
	I0224 15:33:58.868711   41177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 15:33:58.871912   41177 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 15:33:58.871925   41177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 15:33:58.872000   41177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-122000
	I0224 15:33:58.872129   41177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59773 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa Username:docker}
	I0224 15:33:58.875773   41177 system_pods.go:59] 5 kube-system pods found
	I0224 15:33:58.875799   41177 system_pods.go:61] "etcd-kubernetes-upgrade-122000" [bf200df1-1acf-4b42-91bd-9064be263e40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 15:33:58.875807   41177 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-122000" [a7fde842-cbb4-4057-99bd-0c83ba33075c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0224 15:33:58.875835   41177 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-122000" [55a5d916-1887-4394-8fac-e9f14adb929e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0224 15:33:58.875845   41177 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-122000" [7ba4fe30-897c-4d67-9333-3eceea9951da] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0224 15:33:58.875851   41177 system_pods.go:61] "storage-provisioner" [b9e9f490-9719-4f75-a1fd-d8c28f4cd08f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0224 15:33:58.875856   41177 system_pods.go:74] duration metric: took 7.140029ms to wait for pod list to return data ...
	I0224 15:33:58.875863   41177 kubeadm.go:578] duration metric: took 212.725889ms to wait for : map[apiserver:true system_pods:true] ...
	I0224 15:33:58.875874   41177 node_conditions.go:102] verifying NodePressure condition ...
	I0224 15:33:58.879470   41177 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0224 15:33:58.879486   41177 node_conditions.go:123] node cpu capacity is 6
	I0224 15:33:58.879499   41177 node_conditions.go:105] duration metric: took 3.621622ms to run NodePressure ...
	I0224 15:33:58.879507   41177 start.go:228] waiting for startup goroutines ...
	I0224 15:33:58.939470   41177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59773 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/kubernetes-upgrade-122000/id_rsa Username:docker}
	I0224 15:33:58.978827   41177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 15:33:59.049995   41177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 15:33:59.669083   41177 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0224 15:33:59.727537   41177 addons.go:492] enable addons completed in 1.071838558s: enabled=[storage-provisioner default-storageclass]
	I0224 15:33:59.727628   41177 start.go:233] waiting for cluster config update ...
	I0224 15:33:59.727657   41177 start.go:242] writing updated cluster config ...
	I0224 15:33:59.728209   41177 ssh_runner.go:195] Run: rm -f paused
	I0224 15:33:59.768420   41177 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0224 15:33:59.791346   41177 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-122000" cluster and "default" namespace by default
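	(The line before "Done!" reports the skew between the local kubectl, 1.25.4, and the cluster, 1.26.1; minikube only warns when the minor versions differ by more than one. A hedged Go sketch of that comparison, looking only at the major.minor components and using the two version strings from the log as its example input.)

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor versions of two
    // "major.minor.patch" strings, e.g. minorSkew("1.25.4", "1.26.1") == 1.
    func minorSkew(a, b string) (int, error) {
        ma, err := minor(a)
        if err != nil {
            return 0, err
        }
        mb, err := minor(b)
        if err != nil {
            return 0, err
        }
        if ma > mb {
            return ma - mb, nil
        }
        return mb - ma, nil
    }

    func minor(v string) (int, error) {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        if len(parts) < 2 {
            return 0, fmt.Errorf("unexpected version %q", v)
        }
        return strconv.Atoi(parts[1])
    }

    func main() {
        skew, err := minorSkew("1.25.4", "1.26.1")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("kubectl/cluster minor skew: %d\n", skew) // matches "minor skew: 1" in the log
    }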
	I0224 15:33:55.195296   41047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 15:33:55.195421   41047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=08976559d74fb9c2654733dc21cb8f9d9ec24374 minikube.k8s.io/name=kindnet-416000 minikube.k8s.io/updated_at=2023_02_24T15_33_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:33:55.195425   41047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:33:55.279133   41047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:33:55.305760   41047 ops.go:34] apiserver oom_adj: -16
	I0224 15:33:55.887588   41047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:33:56.388038   41047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:33:56.888726   41047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:33:57.387663   41047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:33:57.887672   41047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:33:58.387692   41047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:33:58.887546   41047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 15:33:59.388236   41047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 23:29:02 UTC, end at Fri 2023-02-24 23:34:01 UTC. --
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.705361234Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.705380224Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.705398838Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.705415547Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.705437695Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.705467069Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.705725884Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.705793650Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.706269521Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.716790705Z" level=info msg="Loading containers: start."
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.812760322Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.848400936Z" level=info msg="Loading containers: done."
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.857240196Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.857308279Z" level=info msg="Daemon has completed initialization"
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.879431498Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 23:33:45 kubernetes-upgrade-122000 systemd[1]: Started Docker Application Container Engine.
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.886565130Z" level=info msg="API listen on [::]:2376"
	Feb 24 23:33:45 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:45.889446919Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 24 23:33:51 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:51.422780162Z" level=info msg="ignoring event" container=8a99d0d6ebf1bd626197b1f395c625174b42655ae32e9bf6ebed06f882068a1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 23:33:51 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:51.423391873Z" level=info msg="ignoring event" container=f16be056fb3bc52b0a15829a66ba2799e98fe45b0812e38d2448b3a71ad37f40 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 23:33:51 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:51.424417590Z" level=info msg="ignoring event" container=d3137f4ecec7a415e80edc40630c1739c54b6dcccd22538d7ee4a92b81621150 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 23:33:51 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:51.431531196Z" level=info msg="ignoring event" container=b86b7e885e977fe4b7178417de02e453c9563fd740c061b853e230a87612ba00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 23:33:51 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:51.435676290Z" level=info msg="ignoring event" container=94ddd0f32d8059b25e1b9c75fba4fbc5160e548ae69fed4617ae15e09c3f230d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 23:33:51 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:51.489451084Z" level=info msg="ignoring event" container=33e53e2a106242dc782bfb12f0ec1d845d849935a3b2790a55a1c0ff6ae89cec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 23:33:52 kubernetes-upgrade-122000 dockerd[11360]: time="2023-02-24T23:33:52.288050023Z" level=info msg="ignoring event" container=0f0e26bb01ff3e781f046f4446aef9fd6d824612b24390dbfc921ea84231edb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9fb76ab505f22       deb04688c4a35       7 seconds ago       Running             kube-apiserver            2                   8bf3d7bd35273
	99022d65dc20d       fce326961ae2d       7 seconds ago       Running             etcd                      2                   8c413f2a089aa
	ad037f38ae434       655493523f607       7 seconds ago       Running             kube-scheduler            2                   b170d25f1550e
	1a527d0f94085       e9c08e11b07f6       7 seconds ago       Running             kube-controller-manager   1                   9c8ca6ff8e2bf
	94ddd0f32d805       655493523f607       14 seconds ago      Exited              kube-scheduler            1                   8a99d0d6ebf1b
	33e53e2a10624       fce326961ae2d       14 seconds ago      Exited              etcd                      1                   b86b7e885e977
	0f0e26bb01ff3       deb04688c4a35       15 seconds ago      Exited              kube-apiserver            1                   f16be056fb3bc
	dc39e5b540461       e9c08e11b07f6       30 seconds ago      Exited              kube-controller-manager   0                   f5f62df3a47f1
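	
	A hedged sketch for pulling the logs of one of the exited containers listed above, assuming the docker runtime reported for this node (container ID taken from the table; illustrative only):
	
	  out/minikube-darwin-amd64 -p kubernetes-upgrade-122000 ssh "docker logs 0f0e26bb01ff3"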
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-122000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-122000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08976559d74fb9c2654733dc21cb8f9d9ec24374
	                    minikube.k8s.io/name=kubernetes-upgrade-122000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_24T15_33_38_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 24 Feb 2023 23:33:34 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-122000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 24 Feb 2023 23:33:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 24 Feb 2023 23:33:57 +0000   Fri, 24 Feb 2023 23:33:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 24 Feb 2023 23:33:57 +0000   Fri, 24 Feb 2023 23:33:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 24 Feb 2023 23:33:57 +0000   Fri, 24 Feb 2023 23:33:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 24 Feb 2023 23:33:57 +0000   Fri, 24 Feb 2023 23:33:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-122000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    892ae553-d6f4-4035-a8a5-8b0131f3b246
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-122000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         24s
	  kube-system                 kube-apiserver-kubernetes-upgrade-122000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-122000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-kubernetes-upgrade-122000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 31s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s (x4 over 31s)  kubelet  Node kubernetes-upgrade-122000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s (x4 over 31s)  kubelet  Node kubernetes-upgrade-122000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s (x3 over 31s)  kubelet  Node kubernetes-upgrade-122000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  31s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 24s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  24s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24s                kubelet  Node kubernetes-upgrade-122000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s                kubelet  Node kubernetes-upgrade-122000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s                kubelet  Node kubernetes-upgrade-122000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                20s                kubelet  Node kubernetes-upgrade-122000 status is now: NodeReady
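	
	For reference, the same node view can be regenerated against this profile with, for example:
	
	  kubectl --context kubernetes-upgrade-122000 describe node kubernetes-upgrade-122000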
	
	* 
	* ==> dmesg <==
	* [  +0.000064] FS-Cache: O-key=[8] '235dc60400000000'
	[  +0.000061] FS-Cache: N-cookie c=0000001c [p=00000014 fl=2 nc=0 na=1]
	[  +0.000043] FS-Cache: N-cookie d=00000000b6b06f96{9p.inode} n=000000001a032d23
	[  +0.000078] FS-Cache: N-key=[8] '235dc60400000000'
	[  +0.003038] FS-Cache: Duplicate cookie detected
	[  +0.000092] FS-Cache: O-cookie c=00000016 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000045] FS-Cache: O-cookie d=00000000b6b06f96{9p.inode} n=00000000c09690c2
	[  +0.000073] FS-Cache: O-key=[8] '235dc60400000000'
	[  +0.000050] FS-Cache: N-cookie c=0000001d [p=00000014 fl=2 nc=0 na=1]
	[  +0.000048] FS-Cache: N-cookie d=00000000b6b06f96{9p.inode} n=00000000fa0a547a
	[  +0.000052] FS-Cache: N-key=[8] '235dc60400000000'
	[  +3.553193] FS-Cache: Duplicate cookie detected
	[  +0.000091] FS-Cache: O-cookie c=00000017 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000052] FS-Cache: O-cookie d=00000000b6b06f96{9p.inode} n=0000000081c9d0cb
	[  +0.000059] FS-Cache: O-key=[8] '225dc60400000000'
	[  +0.000031] FS-Cache: N-cookie c=00000020 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000052] FS-Cache: N-cookie d=00000000b6b06f96{9p.inode} n=0000000011fb7533
	[  +0.000047] FS-Cache: N-key=[8] '225dc60400000000'
	[  +0.400852] FS-Cache: Duplicate cookie detected
	[  +0.000038] FS-Cache: O-cookie c=0000001a [p=00000014 fl=226 nc=0 na=1]
	[  +0.000057] FS-Cache: O-cookie d=00000000b6b06f96{9p.inode} n=00000000dd227ced
	[  +0.000061] FS-Cache: O-key=[8] '2b5dc60400000000'
	[  +0.000046] FS-Cache: N-cookie c=00000021 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000033] FS-Cache: N-cookie d=00000000b6b06f96{9p.inode} n=00000000cab77509
	[  +0.000067] FS-Cache: N-key=[8] '2b5dc60400000000'
	
	* 
	* ==> etcd [33e53e2a1062] <==
	* {"level":"info","ts":"2023-02-24T23:33:47.406Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-24T23:33:47.407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T23:33:47.407Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T23:33:47.407Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-02-24T23:33:47.407Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-02-24T23:33:48.596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-02-24T23:33:48.596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-02-24T23:33:48.596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-02-24T23:33:48.596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-02-24T23:33:48.596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-02-24T23:33:48.596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-02-24T23:33:48.596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-02-24T23:33:48.597Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-122000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-24T23:33:48.598Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T23:33:48.598Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T23:33:48.599Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-24T23:33:48.599Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-02-24T23:33:48.601Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-24T23:33:48.601Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-24T23:33:51.395Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-24T23:33:51.396Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-122000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"info","ts":"2023-02-24T23:33:51.406Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-02-24T23:33:51.408Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-02-24T23:33:51.410Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-02-24T23:33:51.410Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-122000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [99022d65dc20] <==
	* {"level":"info","ts":"2023-02-24T23:33:54.337Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-24T23:33:54.337Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-24T23:33:54.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-02-24T23:33:54.337Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-02-24T23:33:54.338Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T23:33:54.338Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-24T23:33:54.338Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-24T23:33:54.338Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-02-24T23:33:54.338Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-02-24T23:33:54.338Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-24T23:33:54.338Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-24T23:33:55.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-02-24T23:33:55.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-02-24T23:33:55.866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-02-24T23:33:55.867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-02-24T23:33:55.867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-02-24T23:33:55.867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-02-24T23:33:55.867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-02-24T23:33:55.868Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-122000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-24T23:33:55.868Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T23:33:55.868Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-24T23:33:55.868Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-24T23:33:55.868Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-24T23:33:55.869Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-24T23:33:55.869Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> kernel <==
	*  23:34:02 up  2:33,  0 users,  load average: 2.86, 1.80, 1.40
	Linux kubernetes-upgrade-122000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [0f0e26bb01ff] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 23:33:51.403731       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 23:33:51.403768       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0224 23:33:51.403807       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [9fb76ab505f2] <==
	* I0224 23:33:56.907726       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0224 23:33:56.907733       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0224 23:33:56.909070       1 autoregister_controller.go:141] Starting autoregister controller
	I0224 23:33:56.909077       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0224 23:33:56.909104       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0224 23:33:56.907199       1 customresource_discovery_controller.go:288] Starting DiscoveryController
	I0224 23:33:56.917574       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0224 23:33:56.917614       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0224 23:33:56.949558       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0224 23:33:57.008051       1 shared_informer.go:280] Caches are synced for configmaps
	I0224 23:33:57.008345       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0224 23:33:57.008565       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0224 23:33:57.008594       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0224 23:33:57.008749       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0224 23:33:57.008709       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0224 23:33:57.009409       1 cache.go:39] Caches are synced for autoregister controller
	I0224 23:33:57.020622       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0224 23:33:57.021461       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0224 23:33:57.737561       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0224 23:33:57.909505       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0224 23:33:58.578744       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0224 23:33:58.584819       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0224 23:33:58.606511       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0224 23:33:58.621073       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0224 23:33:58.639328       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
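	
	With the replacement apiserver serving again, an illustrative readiness check against this context might be:
	
	  kubectl --context kubernetes-upgrade-122000 get --raw='/readyz?verbose'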
	
	* 
	* ==> kube-controller-manager [1a527d0f9408] <==
	* I0224 23:34:00.359874       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I0224 23:34:00.359913       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for limitranges
	I0224 23:34:00.359924       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpoints
	I0224 23:34:00.359939       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for jobs.batch
	I0224 23:34:00.360014       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for daemonsets.apps
	I0224 23:34:00.360176       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I0224 23:34:00.360250       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0224 23:34:00.360264       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for statefulsets.apps
	I0224 23:34:00.360273       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for controllerrevisions.apps
	I0224 23:34:00.360284       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for deployments.apps
	I0224 23:34:00.360301       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for cronjobs.batch
	I0224 23:34:00.360313       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	W0224 23:34:00.360321       1 shared_informer.go:550] resyncPeriod 22h3m5.121496136s is smaller than resyncCheckPeriod 22h39m52.617723888s and the informer has already started. Changing it to 22h39m52.617723888s
	I0224 23:34:00.360381       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for podtemplates
	I0224 23:34:00.360459       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for replicasets.apps
	I0224 23:34:00.360472       1 controllermanager.go:622] Started "resourcequota"
	I0224 23:34:00.360588       1 resource_quota_controller.go:277] Starting resource quota controller
	I0224 23:34:00.360595       1 shared_informer.go:273] Waiting for caches to sync for resource quota
	I0224 23:34:00.360604       1 resource_quota_monitor.go:295] QuotaMonitor running
	I0224 23:34:00.401195       1 controllermanager.go:622] Started "daemonset"
	I0224 23:34:00.401352       1 daemon_controller.go:265] Starting daemon sets controller
	I0224 23:34:00.401383       1 shared_informer.go:273] Waiting for caches to sync for daemon sets
	I0224 23:34:00.612434       1 controllermanager.go:622] Started "cronjob"
	I0224 23:34:00.612687       1 cronjob_controllerv2.go:137] "Starting cronjob controller v2"
	I0224 23:34:00.612748       1 shared_informer.go:273] Waiting for caches to sync for cronjob
	
	* 
	* ==> kube-controller-manager [dc39e5b54046] <==
	* I0224 23:33:36.546362       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for podtemplates
	I0224 23:33:36.546397       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for cronjobs.batch
	I0224 23:33:36.546418       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I0224 23:33:36.546436       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
	I0224 23:33:36.546450       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	I0224 23:33:36.546516       1 controllermanager.go:622] Started "resourcequota"
	I0224 23:33:36.546551       1 resource_quota_controller.go:277] Starting resource quota controller
	I0224 23:33:36.546578       1 shared_informer.go:273] Waiting for caches to sync for resource quota
	I0224 23:33:36.546600       1 resource_quota_monitor.go:295] QuotaMonitor running
	I0224 23:33:36.680890       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-serving"
	I0224 23:33:36.680945       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0224 23:33:36.680947       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0224 23:33:36.681134       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-client"
	I0224 23:33:36.681185       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0224 23:33:36.681170       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0224 23:33:36.681491       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0224 23:33:36.681545       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0224 23:33:36.681523       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0224 23:33:36.681606       1 controllermanager.go:622] Started "csrsigning"
	I0224 23:33:36.681636       1 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
	I0224 23:33:36.681665       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0224 23:33:36.681681       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0224 23:33:36.832342       1 controllermanager.go:622] Started "ttl"
	I0224 23:33:36.832361       1 ttl_controller.go:120] Starting TTL controller
	I0224 23:33:36.832432       1 shared_informer.go:273] Waiting for caches to sync for TTL
	
	* 
	* ==> kube-scheduler [94ddd0f32d80] <==
	* I0224 23:33:48.014329       1 serving.go:348] Generated self-signed cert in-memory
	W0224 23:33:49.839997       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0224 23:33:49.840240       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 23:33:49.840301       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0224 23:33:49.840453       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0224 23:33:49.894560       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0224 23:33:49.894604       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 23:33:49.895469       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0224 23:33:49.895554       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 23:33:49.898902       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 23:33:49.897491       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0224 23:33:50.000067       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 23:33:51.386985       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0224 23:33:51.387408       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0224 23:33:51.387606       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [ad037f38ae43] <==
	* I0224 23:33:55.057448       1 serving.go:348] Generated self-signed cert in-memory
	W0224 23:33:56.926070       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0224 23:33:56.926091       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0224 23:33:56.926099       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0224 23:33:56.926104       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0224 23:33:56.945802       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0224 23:33:56.945990       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 23:33:56.948090       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0224 23:33:56.948297       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0224 23:33:56.951855       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0224 23:33:56.952513       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0224 23:33:57.056010       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 23:29:02 UTC, end at Fri 2023-02-24 23:34:02 UTC. --
	Feb 24 23:33:53 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:53.907456   12696 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-122000"
	Feb 24 23:33:53 kubernetes-upgrade-122000 kubelet[12696]: E0224 23:33:53.907910   12696 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-122000"
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:54.123259   12696 scope.go:115] "RemoveContainer" containerID="dc39e5b540461f2ff4f1dae0a362bc67ae257f5624aa876814ecefb05cd3392c"
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:54.131270   12696 scope.go:115] "RemoveContainer" containerID="94ddd0f32d8059b25e1b9c75fba4fbc5160e548ae69fed4617ae15e09c3f230d"
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:54.137583   12696 scope.go:115] "RemoveContainer" containerID="33e53e2a106242dc782bfb12f0ec1d845d849935a3b2790a55a1c0ff6ae89cec"
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: E0224 23:33:54.143714   12696 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-122000?timeout=10s": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:54.318713   12696 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-122000"
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: E0224 23:33:54.318988   12696 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-122000"
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: W0224 23:33:54.644921   12696 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: E0224 23:33:54.644980   12696 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: W0224 23:33:54.650420   12696 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: E0224 23:33:54.650524   12696 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:54.788523   12696 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f16be056fb3bc52b0a15829a66ba2799e98fe45b0812e38d2448b3a71ad37f40"
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:54.790237   12696 status_manager.go:698] "Failed to get status for pod" podUID=bf5369a186ebe791d7b0dda874cad7d9 pod="kube-system/kube-scheduler-kubernetes-upgrade-122000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes-upgrade-122000\": dial tcp 192.168.67.2:8443: connect: connection refused"
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:54.797180   12696 status_manager.go:698] "Failed to get status for pod" podUID=f032ced407f8ce870a59be4a77a17b87 pod="kube-system/etcd-kubernetes-upgrade-122000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-kubernetes-upgrade-122000\": dial tcp 192.168.67.2:8443: connect: connection refused"
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:54.798490   12696 scope.go:115] "RemoveContainer" containerID="0f0e26bb01ff3e781f046f4446aef9fd6d824612b24390dbfc921ea84231edb0"
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:54.931103   12696 status_manager.go:698] "Failed to get status for pod" podUID=f489848a22d68b59859eba3fb3ca2ae6 pod="kube-system/kube-apiserver-kubernetes-upgrade-122000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-122000\": dial tcp 192.168.67.2:8443: connect: connection refused"
	Feb 24 23:33:54 kubernetes-upgrade-122000 kubelet[12696]: E0224 23:33:54.944892   12696 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-122000?timeout=10s": dial tcp 192.168.67.2:8443: connect: connection refused
	Feb 24 23:33:55 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:55.135341   12696 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-122000"
	Feb 24 23:33:57 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:57.043319   12696 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-122000"
	Feb 24 23:33:57 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:57.043416   12696 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-122000"
	Feb 24 23:33:57 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:57.482023   12696 apiserver.go:52] "Watching apiserver"
	Feb 24 23:33:57 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:57.491379   12696 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 24 23:33:57 kubernetes-upgrade-122000 kubelet[12696]: I0224 23:33:57.570085   12696 reconciler.go:41] "Reconciler: start to sync state"
	Feb 24 23:33:57 kubernetes-upgrade-122000 kubelet[12696]: E0224 23:33:57.889191   12696 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-122000\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-122000"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-122000 -n kubernetes-upgrade-122000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-122000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-122000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-122000 describe pod storage-provisioner: exit status 1 (55.44276ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-122000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-122000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-122000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-122000: (3.092243541s)
--- FAIL: TestKubernetesUpgrade (557.81s)

                                                
                                    
TestMissingContainerUpgrade (72.04s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
E0224 15:23:39.818710   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.822982838.exe start -p missing-upgrade-037000 --memory=2200 --driver=docker 
E0224 15:24:00.301611   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.822982838.exe start -p missing-upgrade-037000 --memory=2200 --driver=docker : exit status 78 (54.485405893s)

                                                
                                                
-- stdout --
	* [missing-upgrade-037000] minikube v1.9.1 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-037000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-037000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:24:13.704861055 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-037000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:24:33.056180440 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
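The failure above is systemd refusing to restart dockerd after the provisioner rewrites /lib/systemd/system/docker.service. As a rough sketch of how one might inspect that by hand, assuming the kic container (here missing-upgrade-037000) were still running; these commands are illustrative and were not part of the test run:

	# Ask systemd why dockerd did not come up inside the minikube container
	docker exec -it missing-upgrade-037000 systemctl status docker.service
	# Tail the unit's journal for the concrete dockerd error (e.g. a rejected flag)
	docker exec -it missing-upgrade-037000 journalctl -u docker.service --no-pager | tail -n 50
	# Inspect the rewritten unit that the provisioner installed
	docker exec -it missing-upgrade-037000 cat /lib/systemd/system/docker.service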
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.822982838.exe start -p missing-upgrade-037000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.822982838.exe start -p missing-upgrade-037000 --memory=2200 --driver=docker : exit status 70 (3.961570845s)

                                                
                                                
-- stdout --
	* [missing-upgrade-037000] minikube v1.9.1 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-037000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-037000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
E0224 15:24:41.263669   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.822982838.exe start -p missing-upgrade-037000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.822982838.exe start -p missing-upgrade-037000 --memory=2200 --driver=docker : exit status 70 (4.411881547s)

                                                
                                                
-- stdout --
	* [missing-upgrade-037000] minikube v1.9.1 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-037000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-037000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:323: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-02-24 15:24:45.875868 -0800 PST m=+2638.569826712
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-037000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-037000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5891574cb5fb8d4b8fa4e89279b89e8956515af2103aba82c28a2841430775b",
	        "Created": "2023-02-24T23:24:21.871657521Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 557811,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T23:24:22.10478619Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/d5891574cb5fb8d4b8fa4e89279b89e8956515af2103aba82c28a2841430775b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5891574cb5fb8d4b8fa4e89279b89e8956515af2103aba82c28a2841430775b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5891574cb5fb8d4b8fa4e89279b89e8956515af2103aba82c28a2841430775b/hosts",
	        "LogPath": "/var/lib/docker/containers/d5891574cb5fb8d4b8fa4e89279b89e8956515af2103aba82c28a2841430775b/d5891574cb5fb8d4b8fa4e89279b89e8956515af2103aba82c28a2841430775b-json.log",
	        "Name": "/missing-upgrade-037000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-037000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8bc56eb69fa82f67ac00e3ba6bccaa6b52e85749ed31ef16a68576c11034fec4-init/diff:/var/lib/docker/overlay2/c13d67ee259a145834be131d120f57a8b74b26f03264cbaa49a6e7f89c2695ea/diff:/var/lib/docker/overlay2/7aae925b332d02c79f368825822f5c8d6b8c82e1f82ded62b82e4e2aeef670bd/diff:/var/lib/docker/overlay2/7b4a4d59152394c69b456d14f028c31329891e6f604acbd7e712d9546261d2e4/diff:/var/lib/docker/overlay2/2aece4b18a46f3ca6fdf10cec972d712837ccf924a91645bc2b42b60dca891ab/diff:/var/lib/docker/overlay2/8308500ba2e3166db5789fd9526626bfa28ea6618735de4a023b242fe6c5d9e9/diff:/var/lib/docker/overlay2/57c2c56bd4013f092332d4f267fd259293e918d12beabad8147b8c31a4095c4c/diff:/var/lib/docker/overlay2/6e19fdf7d724140c232bc24d73d7ba4a37cc8e9416280d33565adf5cc6863599/diff:/var/lib/docker/overlay2/bacc5d4bb78fb84890f2e628a25ba01772950d6298f93abce799ea6ccaafa167/diff:/var/lib/docker/overlay2/0c23a7f22bbb1a1577e622874447b59217772d1322184866f058b6a4ee593c0f/diff:/var/lib/docker/overlay2/e69b5d
b0926c48fca036abe9031096467369444e9a8247be4a9d4e60ab8d3f59/diff:/var/lib/docker/overlay2/d5f3d88881cf71cb07a50061bb950cac2afeb9f8132ef4e5c9a16d67c0818fdc/diff:/var/lib/docker/overlay2/3bd4fab84ff9d15eab75f77ef4283da0755d5424845045488786038fbf03f213/diff:/var/lib/docker/overlay2/6393d88f777bd1f782a595e004a2f7d6650a32225d196691fe0884c1ae396ffa/diff:/var/lib/docker/overlay2/c7983a89021b05ace00f6872220a4e6af305227df2de1b4f5d82436fb94f59a9/diff:/var/lib/docker/overlay2/5fb749c964bbe3fc186ca9fa17a5505c2448e1c0a1ab5727dc45b0132354445e/diff:/var/lib/docker/overlay2/9a3daa91e271a19f83c03847aefb1b63815ba6aa6150b5700b8b91505bb88471/diff:/var/lib/docker/overlay2/b324c9cb70f4af14ef9f3c912de478d470138826674d95b4de56854729d609a1/diff:/var/lib/docker/overlay2/ad8d95b3d98fdfd627dfb8d141a822d6089a95aeb7bb350ddba19bd064f344be/diff:/var/lib/docker/overlay2/2e8292cf3d7ed7c67dea80ddd66cb9e05109c4d3c9ba81800db67b4150e91294/diff:/var/lib/docker/overlay2/6ccba9f2d78485aaead12ebf34a707c82af9172224a9b45273f12c86e0a8559d/diff:/var/lib/d
ocker/overlay2/9388ff11ba9171b0d512e7500a2e393d19b9c51f4dc181220daee728bd0452c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8bc56eb69fa82f67ac00e3ba6bccaa6b52e85749ed31ef16a68576c11034fec4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8bc56eb69fa82f67ac00e3ba6bccaa6b52e85749ed31ef16a68576c11034fec4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8bc56eb69fa82f67ac00e3ba6bccaa6b52e85749ed31ef16a68576c11034fec4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-037000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-037000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-037000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-037000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-037000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3401a68e99aff3f39c81f62fd87ab4aa77ab70794de769843333eb467601b3a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59461"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59462"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59463"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c3401a68e99a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "c8dbca4d27811c4eb22cdf9094b354bc2929d95052034c40117b4d5c6db1ab1c",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "6f94c1d740c064c2ad9e97d1a8e110ee01e0317576244f5dcb4130ae7c7f6f60",
	                    "EndpointID": "c8dbca4d27811c4eb22cdf9094b354bc2929d95052034c40117b4d5c6db1ab1c",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-037000 -n missing-upgrade-037000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-037000 -n missing-upgrade-037000: exit status 6 (392.76557ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 15:24:46.317717   38187 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-037000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-037000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-037000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-037000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-037000: (2.420073373s)
--- FAIL: TestMissingContainerUpgrade (72.04s)
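The post-mortem status check above also warns that kubectl is pointing at a stale minikube endpoint. Outside the test harness, the warning's own suggestion would amount to the following (illustrative only, assuming the profile still existed rather than being removed by the cleanup step):

	# Repoint the kubectl context at the current endpoint for this profile
	out/minikube-darwin-amd64 update-context -p missing-upgrade-037000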

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (81.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.981013579.exe start -p stopped-upgrade-426000 --memory=2200 --vm-driver=docker 
E0224 15:25:54.217736   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 15:26:03.186739   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.981013579.exe start -p stopped-upgrade-426000 --memory=2200 --vm-driver=docker : exit status 70 (1m9.818672194s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-426000] minikube v1.9.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig944274426
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:26:10.797892886 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-426000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:26:30.373962285 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-426000", then "minikube start -p stopped-upgrade-426000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 97.48 KiB ... 542.91 MiB (download progress updates elided)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:26:30.373962285 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.981013579.exe start -p stopped-upgrade-426000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.981013579.exe start -p stopped-upgrade-426000 --memory=2200 --vm-driver=docker : exit status 70 (4.653740038s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-426000] minikube v1.9.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2379036069
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-426000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.981013579.exe start -p stopped-upgrade-426000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.981013579.exe start -p stopped-upgrade-426000 --memory=2200 --vm-driver=docker : exit status 70 (4.348780534s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-426000] minikube v1.9.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig252750591
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-426000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:197: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (81.57s)
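Every legacy-binary start in this run fails at the same provisioning step: the generated docker.service replaces the stock unit, dockerd then exits during startup, and systemd reports the control-process error quoted above. The comments embedded in the generated unit describe the override rule involved; as a hedged illustration (not taken from the test run), any systemd unit or drop-in that redefines the start command must blank the inherited value first:

	[Service]
	# Clearing the list is required before assigning a new command; otherwise systemd
	# refuses to start with "Service has more than one ExecStart= setting ..."
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock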

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (250.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-583000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-583000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m10.315228553s)

                                                
                                                
-- stdout --
	* [old-k8s-version-583000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-583000 in cluster old-k8s-version-583000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 15:39:18.772256   44716 out.go:296] Setting OutFile to fd 1 ...
	I0224 15:39:18.772423   44716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:39:18.772428   44716 out.go:309] Setting ErrFile to fd 2...
	I0224 15:39:18.772433   44716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:39:18.772535   44716 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 15:39:18.773956   44716 out.go:303] Setting JSON to false
	I0224 15:39:18.792375   44716 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9532,"bootTime":1677272426,"procs":383,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 15:39:18.792471   44716 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 15:39:18.813592   44716 out.go:177] * [old-k8s-version-583000] minikube v1.29.0 on Darwin 13.2.1
	I0224 15:39:18.855633   44716 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 15:39:18.855674   44716 notify.go:220] Checking for updates...
	I0224 15:39:18.899754   44716 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:39:18.921661   44716 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 15:39:18.942514   44716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 15:39:18.963864   44716 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	I0224 15:39:18.985520   44716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 15:39:19.007465   44716 config.go:182] Loaded profile config "false-416000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:39:19.007568   44716 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 15:39:19.070019   44716 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 15:39:19.070180   44716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:39:19.212113   44716 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-24 23:39:19.119625107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:39:19.255636   44716 out.go:177] * Using the docker driver based on user configuration
	I0224 15:39:19.276575   44716 start.go:296] selected driver: docker
	I0224 15:39:19.276601   44716 start.go:857] validating driver "docker" against <nil>
	I0224 15:39:19.276618   44716 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 15:39:19.280932   44716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:39:19.424184   44716 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-24 23:39:19.331470603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:39:19.424303   44716 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0224 15:39:19.424481   44716 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 15:39:19.446287   44716 out.go:177] * Using Docker Desktop driver with root privileges
	I0224 15:39:19.467806   44716 cni.go:84] Creating CNI manager for ""
	I0224 15:39:19.467914   44716 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0224 15:39:19.467946   44716 start_flags.go:319] config:
	{Name:old-k8s-version-583000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-583000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:39:19.511966   44716 out.go:177] * Starting control plane node old-k8s-version-583000 in cluster old-k8s-version-583000
	I0224 15:39:19.532880   44716 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 15:39:19.553952   44716 out.go:177] * Pulling base image ...
	I0224 15:39:19.595920   44716 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 15:39:19.595976   44716 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 15:39:19.596016   44716 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0224 15:39:19.596036   44716 cache.go:57] Caching tarball of preloaded images
	I0224 15:39:19.596227   44716 preload.go:174] Found /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 15:39:19.596246   44716 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0224 15:39:19.597247   44716 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/config.json ...
	I0224 15:39:19.597417   44716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/config.json: {Name:mkd2607386d03742988084ec03dff342ad825c13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:39:19.653580   44716 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 15:39:19.653823   44716 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 15:39:19.653843   44716 cache.go:193] Successfully downloaded all kic artifacts
	I0224 15:39:19.653892   44716 start.go:364] acquiring machines lock for old-k8s-version-583000: {Name:mk9aaddc56a14fcb74e6153f904a41eee9f24006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 15:39:19.654038   44716 start.go:368] acquired machines lock for "old-k8s-version-583000" in 134.828µs
	I0224 15:39:19.654066   44716 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-583000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-583000 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 15:39:19.654149   44716 start.go:125] createHost starting for "" (driver="docker")
	I0224 15:39:19.675894   44716 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0224 15:39:19.676263   44716 start.go:159] libmachine.API.Create for "old-k8s-version-583000" (driver="docker")
	I0224 15:39:19.676304   44716 client.go:168] LocalClient.Create starting
	I0224 15:39:19.676551   44716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem
	I0224 15:39:19.676647   44716 main.go:141] libmachine: Decoding PEM data...
	I0224 15:39:19.676683   44716 main.go:141] libmachine: Parsing certificate...
	I0224 15:39:19.676801   44716 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem
	I0224 15:39:19.676870   44716 main.go:141] libmachine: Decoding PEM data...
	I0224 15:39:19.676886   44716 main.go:141] libmachine: Parsing certificate...
	I0224 15:39:19.677707   44716 cli_runner.go:164] Run: docker network inspect old-k8s-version-583000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0224 15:39:19.733197   44716 cli_runner.go:211] docker network inspect old-k8s-version-583000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0224 15:39:19.733297   44716 network_create.go:281] running [docker network inspect old-k8s-version-583000] to gather additional debugging logs...
	I0224 15:39:19.733309   44716 cli_runner.go:164] Run: docker network inspect old-k8s-version-583000
	W0224 15:39:19.788385   44716 cli_runner.go:211] docker network inspect old-k8s-version-583000 returned with exit code 1
	I0224 15:39:19.788414   44716 network_create.go:284] error running [docker network inspect old-k8s-version-583000]: docker network inspect old-k8s-version-583000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-583000
	I0224 15:39:19.788427   44716 network_create.go:286] output of [docker network inspect old-k8s-version-583000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-583000
	
	** /stderr **
	I0224 15:39:19.788507   44716 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 15:39:19.844793   44716 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0224 15:39:19.846208   44716 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0224 15:39:19.847741   44716 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0224 15:39:19.848050   44716 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005a81f0}
	I0224 15:39:19.848063   44716 network_create.go:123] attempt to create docker network old-k8s-version-583000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0224 15:39:19.848129   44716 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-583000 old-k8s-version-583000
	I0224 15:39:19.936288   44716 network_create.go:107] docker network old-k8s-version-583000 192.168.76.0/24 created
	I0224 15:39:19.936320   44716 kic.go:117] calculated static IP "192.168.76.2" for the "old-k8s-version-583000" container
	I0224 15:39:19.936476   44716 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 15:39:19.993330   44716 cli_runner.go:164] Run: docker volume create old-k8s-version-583000 --label name.minikube.sigs.k8s.io=old-k8s-version-583000 --label created_by.minikube.sigs.k8s.io=true
	I0224 15:39:20.049024   44716 oci.go:103] Successfully created a docker volume old-k8s-version-583000
	I0224 15:39:20.049166   44716 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-583000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-583000 --entrypoint /usr/bin/test -v old-k8s-version-583000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0224 15:39:20.595655   44716 oci.go:107] Successfully prepared a docker volume old-k8s-version-583000
	I0224 15:39:20.595685   44716 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 15:39:20.595699   44716 kic.go:190] Starting extracting preloaded images to volume ...
	I0224 15:39:20.595827   44716 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-583000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 15:39:27.133253   44716 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-583000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.537323741s)
	I0224 15:39:27.133275   44716 kic.go:199] duration metric: took 6.537518 seconds to extract preloaded images to volume
	I0224 15:39:27.133393   44716 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 15:39:27.279207   44716 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-583000 --name old-k8s-version-583000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-583000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-583000 --network old-k8s-version-583000 --ip 192.168.76.2 --volume old-k8s-version-583000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0224 15:39:27.658004   44716 cli_runner.go:164] Run: docker container inspect old-k8s-version-583000 --format={{.State.Running}}
	I0224 15:39:27.750096   44716 cli_runner.go:164] Run: docker container inspect old-k8s-version-583000 --format={{.State.Status}}
	I0224 15:39:27.812860   44716 cli_runner.go:164] Run: docker exec old-k8s-version-583000 stat /var/lib/dpkg/alternatives/iptables
	I0224 15:39:27.926089   44716 oci.go:144] the created container "old-k8s-version-583000" has a running status.
	I0224 15:39:27.926117   44716 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/old-k8s-version-583000/id_rsa...
	I0224 15:39:27.970887   44716 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/old-k8s-version-583000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 15:39:28.086432   44716 cli_runner.go:164] Run: docker container inspect old-k8s-version-583000 --format={{.State.Status}}
	I0224 15:39:28.146617   44716 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 15:39:28.146644   44716 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-583000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 15:39:28.257236   44716 cli_runner.go:164] Run: docker container inspect old-k8s-version-583000 --format={{.State.Status}}
	I0224 15:39:28.315429   44716 machine.go:88] provisioning docker machine ...
	I0224 15:39:28.315475   44716 ubuntu.go:169] provisioning hostname "old-k8s-version-583000"
	I0224 15:39:28.315577   44716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:39:28.373234   44716 main.go:141] libmachine: Using SSH client type: native
	I0224 15:39:28.373626   44716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61528 <nil> <nil>}
	I0224 15:39:28.373643   44716 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-583000 && echo "old-k8s-version-583000" | sudo tee /etc/hostname
	I0224 15:39:28.517718   44716 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-583000
	
	I0224 15:39:28.517811   44716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:39:28.575988   44716 main.go:141] libmachine: Using SSH client type: native
	I0224 15:39:28.576352   44716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61528 <nil> <nil>}
	I0224 15:39:28.576367   44716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-583000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-583000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-583000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 15:39:28.711013   44716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 15:39:28.711038   44716 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
	I0224 15:39:28.711057   44716 ubuntu.go:177] setting up certificates
	I0224 15:39:28.711068   44716 provision.go:83] configureAuth start
	I0224 15:39:28.711154   44716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-583000
	I0224 15:39:28.769026   44716 provision.go:138] copyHostCerts
	I0224 15:39:28.769119   44716 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
	I0224 15:39:28.769128   44716 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:39:28.769252   44716 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
	I0224 15:39:28.769452   44716 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
	I0224 15:39:28.769465   44716 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:39:28.769533   44716 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
	I0224 15:39:28.769688   44716 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
	I0224 15:39:28.769702   44716 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:39:28.769765   44716 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
	I0224 15:39:28.769889   44716 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-583000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-583000]
	I0224 15:39:28.909406   44716 provision.go:172] copyRemoteCerts
	I0224 15:39:28.909468   44716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 15:39:28.909521   44716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:39:28.967525   44716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/old-k8s-version-583000/id_rsa Username:docker}
	I0224 15:39:29.062713   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 15:39:29.080605   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0224 15:39:29.097767   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 15:39:29.115099   44716 provision.go:86] duration metric: configureAuth took 404.014024ms
	I0224 15:39:29.115113   44716 ubuntu.go:193] setting minikube options for container-runtime
	I0224 15:39:29.115270   44716 config.go:182] Loaded profile config "old-k8s-version-583000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0224 15:39:29.115338   44716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:39:29.174072   44716 main.go:141] libmachine: Using SSH client type: native
	I0224 15:39:29.174435   44716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61528 <nil> <nil>}
	I0224 15:39:29.174448   44716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 15:39:29.309668   44716 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 15:39:29.309688   44716 ubuntu.go:71] root file system type: overlay
	I0224 15:39:29.309789   44716 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 15:39:29.309871   44716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:39:29.369414   44716 main.go:141] libmachine: Using SSH client type: native
	I0224 15:39:29.369779   44716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61528 <nil> <nil>}
	I0224 15:39:29.369833   44716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 15:39:29.512976   44716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 15:39:29.513088   44716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:39:29.571485   44716 main.go:141] libmachine: Using SSH client type: native
	I0224 15:39:29.571846   44716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61528 <nil> <nil>}
	I0224 15:39:29.571858   44716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 15:39:30.234681   44716 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-24 23:39:29.510425235 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 15:39:30.234703   44716 machine.go:91] provisioned docker machine in 1.919235991s
	I0224 15:39:30.234709   44716 client.go:171] LocalClient.Create took 10.558300262s
	I0224 15:39:30.234727   44716 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-583000" took 10.558371888s
	I0224 15:39:30.234735   44716 start.go:300] post-start starting for "old-k8s-version-583000" (driver="docker")
	I0224 15:39:30.234739   44716 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 15:39:30.234815   44716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 15:39:30.234870   44716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:39:30.294036   44716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/old-k8s-version-583000/id_rsa Username:docker}
	I0224 15:39:30.388952   44716 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 15:39:30.392543   44716 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 15:39:30.392562   44716 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 15:39:30.392569   44716 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 15:39:30.392574   44716 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 15:39:30.392585   44716 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
	I0224 15:39:30.392683   44716 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
	I0224 15:39:30.392860   44716 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
	I0224 15:39:30.393073   44716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 15:39:30.400314   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:39:30.417887   44716 start.go:303] post-start completed in 183.142612ms
	I0224 15:39:30.418429   44716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-583000
	I0224 15:39:30.477073   44716 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/config.json ...
	I0224 15:39:30.477499   44716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 15:39:30.477566   44716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:39:30.534859   44716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/old-k8s-version-583000/id_rsa Username:docker}
	I0224 15:39:30.627041   44716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 15:39:30.632142   44716 start.go:128] duration metric: createHost completed in 10.977881782s
	I0224 15:39:30.632167   44716 start.go:83] releasing machines lock for "old-k8s-version-583000", held for 10.978021631s
	I0224 15:39:30.632283   44716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-583000
	I0224 15:39:30.689940   44716 ssh_runner.go:195] Run: cat /version.json
	I0224 15:39:30.689944   44716 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0224 15:39:30.690017   44716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:39:30.690033   44716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:39:30.750560   44716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/old-k8s-version-583000/id_rsa Username:docker}
	I0224 15:39:30.751135   44716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61528 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/old-k8s-version-583000/id_rsa Username:docker}
	I0224 15:39:30.840458   44716 ssh_runner.go:195] Run: systemctl --version
	I0224 15:39:31.142014   44716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 15:39:31.147293   44716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 15:39:31.167610   44716 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 15:39:31.167684   44716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0224 15:39:31.181514   44716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0224 15:39:31.189371   44716 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0224 15:39:31.189387   44716 start.go:485] detecting cgroup driver to use...
	I0224 15:39:31.189397   44716 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:39:31.189476   44716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:39:31.203005   44716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0224 15:39:31.211553   44716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 15:39:31.220749   44716 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 15:39:31.220815   44716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 15:39:31.229481   44716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:39:31.238080   44716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 15:39:31.246795   44716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:39:31.255448   44716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 15:39:31.263328   44716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 15:39:31.271778   44716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 15:39:31.279382   44716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 15:39:31.287456   44716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:39:31.352468   44716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 15:39:31.420875   44716 start.go:485] detecting cgroup driver to use...
	I0224 15:39:31.420895   44716 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:39:31.420961   44716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 15:39:31.431589   44716 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 15:39:31.431647   44716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 15:39:31.442239   44716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:39:31.456387   44716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 15:39:31.552569   44716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 15:39:31.614043   44716 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 15:39:31.614063   44716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 15:39:31.649371   44716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:39:31.710327   44716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 15:39:31.955031   44716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:39:31.981177   44716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:39:32.048668   44716 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0224 15:39:32.048854   44716 cli_runner.go:164] Run: docker exec -t old-k8s-version-583000 dig +short host.docker.internal
	I0224 15:39:32.168801   44716 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 15:39:32.168917   44716 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 15:39:32.173340   44716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:39:32.183573   44716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:39:32.243184   44716 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 15:39:32.243273   44716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:39:32.264206   44716 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0224 15:39:32.264229   44716 docker.go:560] Images already preloaded, skipping extraction
	I0224 15:39:32.264301   44716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:39:32.285296   44716 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0224 15:39:32.285310   44716 cache_images.go:84] Images are preloaded, skipping loading
	I0224 15:39:32.285415   44716 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 15:39:32.312655   44716 cni.go:84] Creating CNI manager for ""
	I0224 15:39:32.312674   44716 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0224 15:39:32.312689   44716 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 15:39:32.312715   44716 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-583000 NodeName:old-k8s-version-583000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 15:39:32.312836   44716 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-583000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-583000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 15:39:32.312914   44716 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-583000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-583000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
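The kubeadm config and kubelet unit rendered above are what the next steps stage onto the node (the config as /var/tmp/minikube/kubeadm.yaml.new, later copied to kubeadm.yaml; the kubelet drop-in under /etc/systemd/system/kubelet.service.d). A minimal sketch for inspecting them in place, assuming the docker driver, the profile name old-k8s-version-583000 from this run, and that systemctl is usable inside the kicbase image:

  # Show the kubeadm config minikube staged on the node
  minikube ssh -p old-k8s-version-583000 -- sudo cat /var/tmp/minikube/kubeadm.yaml
  # Show the kubelet drop-in and the effective unit definition
  minikube ssh -p old-k8s-version-583000 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  minikube ssh -p old-k8s-version-583000 -- systemctl cat kubelet --no-pager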
	I0224 15:39:32.312981   44716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0224 15:39:32.321063   44716 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 15:39:32.321124   44716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 15:39:32.329008   44716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0224 15:39:32.342113   44716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 15:39:32.355017   44716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0224 15:39:32.368209   44716 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0224 15:39:32.372091   44716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:39:32.381972   44716 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000 for IP: 192.168.76.2
	I0224 15:39:32.381989   44716 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:39:32.382169   44716 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 15:39:32.382252   44716 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 15:39:32.382300   44716 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/client.key
	I0224 15:39:32.382317   44716 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/client.crt with IP's: []
	I0224 15:39:32.465165   44716 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/client.crt ...
	I0224 15:39:32.465177   44716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/client.crt: {Name:mk55f60cdd892f02ffc5c80be721a757a13b0afa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:39:32.465493   44716 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/client.key ...
	I0224 15:39:32.465501   44716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/client.key: {Name:mkc5b509d21f9d1d254d5a041a75a47ff5e10c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:39:32.465698   44716 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.key.31bdca25
	I0224 15:39:32.465713   44716 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0224 15:39:32.504086   44716 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.crt.31bdca25 ...
	I0224 15:39:32.504094   44716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.crt.31bdca25: {Name:mkd92244873505dbdd962df06228125f047fd833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:39:32.504311   44716 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.key.31bdca25 ...
	I0224 15:39:32.504318   44716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.key.31bdca25: {Name:mk7eb9547414c976ecb1c411d54d68eff28e95ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:39:32.504508   44716 certs.go:333] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.crt
	I0224 15:39:32.504685   44716 certs.go:337] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.key
	I0224 15:39:32.504856   44716 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/proxy-client.key
	I0224 15:39:32.504871   44716 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/proxy-client.crt with IP's: []
	I0224 15:39:32.642231   44716 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/proxy-client.crt ...
	I0224 15:39:32.642245   44716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/proxy-client.crt: {Name:mkc6137b1a44f8754442584ad219305cb965fa48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:39:32.642531   44716 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/proxy-client.key ...
	I0224 15:39:32.642544   44716 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/proxy-client.key: {Name:mk4f693d8816763adf74ed676ae7fccabb433ae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:39:32.643011   44716 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 15:39:32.643065   44716 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 15:39:32.643079   44716 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 15:39:32.643114   44716 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 15:39:32.643148   44716 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 15:39:32.643181   44716 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 15:39:32.643255   44716 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:39:32.643789   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 15:39:32.662228   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 15:39:32.679761   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 15:39:32.697184   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 15:39:32.714582   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 15:39:32.731914   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 15:39:32.749237   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 15:39:32.766706   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 15:39:32.784449   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 15:39:32.801861   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 15:39:32.820112   44716 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 15:39:32.837701   44716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 15:39:32.850678   44716 ssh_runner.go:195] Run: openssl version
	I0224 15:39:32.856493   44716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 15:39:32.865097   44716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 15:39:32.869086   44716 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:39:32.869130   44716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 15:39:32.874857   44716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 15:39:32.883287   44716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 15:39:32.891885   44716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:39:32.896312   44716 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:39:32.896364   44716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:39:32.902200   44716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 15:39:32.910740   44716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 15:39:32.918994   44716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 15:39:32.923047   44716 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:39:32.923098   44716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 15:39:32.928633   44716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
	I0224 15:39:32.936994   44716 kubeadm.go:401] StartCluster: {Name:old-k8s-version-583000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-583000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:39:32.937101   44716 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 15:39:32.956607   44716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 15:39:32.964669   44716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 15:39:32.972225   44716 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 15:39:32.972288   44716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 15:39:32.980055   44716 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 15:39:32.980094   44716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 15:39:33.029835   44716 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0224 15:39:33.030014   44716 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 15:39:33.200427   44716 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 15:39:33.200512   44716 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 15:39:33.200597   44716 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 15:39:33.354986   44716 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:39:33.355737   44716 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:39:33.362227   44716 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0224 15:39:33.426682   44716 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 15:39:33.448339   44716 out.go:204]   - Generating certificates and keys ...
	I0224 15:39:33.448425   44716 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 15:39:33.448496   44716 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 15:39:33.616334   44716 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 15:39:33.740341   44716 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0224 15:39:33.828924   44716 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0224 15:39:33.957212   44716 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0224 15:39:34.006794   44716 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0224 15:39:34.006916   44716 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-583000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0224 15:39:34.055490   44716 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0224 15:39:34.055736   44716 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-583000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0224 15:39:34.197316   44716 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 15:39:34.285624   44716 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 15:39:34.453801   44716 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0224 15:39:34.454122   44716 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 15:39:34.542554   44716 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 15:39:34.661938   44716 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 15:39:34.737368   44716 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 15:39:34.811451   44716 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 15:39:34.812329   44716 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 15:39:34.835745   44716 out.go:204]   - Booting up control plane ...
	I0224 15:39:34.835918   44716 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 15:39:34.836009   44716 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 15:39:34.836099   44716 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 15:39:34.836176   44716 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 15:39:34.836334   44716 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 15:40:14.821218   44716 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0224 15:40:14.821407   44716 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:40:14.821766   44716 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:40:19.823167   44716 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:40:19.823381   44716 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:40:29.824079   44716 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:40:29.824239   44716 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:40:49.826193   44716 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:40:49.826420   44716 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:41:29.828374   44716 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:41:29.828588   44716 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:41:29.828602   44716 kubeadm.go:322] 
	I0224 15:41:29.828639   44716 kubeadm.go:322] Unfortunately, an error has occurred:
	I0224 15:41:29.828687   44716 kubeadm.go:322] 	timed out waiting for the condition
	I0224 15:41:29.828702   44716 kubeadm.go:322] 
	I0224 15:41:29.828734   44716 kubeadm.go:322] This error is likely caused by:
	I0224 15:41:29.828767   44716 kubeadm.go:322] 	- The kubelet is not running
	I0224 15:41:29.828875   44716 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 15:41:29.828887   44716 kubeadm.go:322] 
	I0224 15:41:29.828995   44716 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 15:41:29.829040   44716 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0224 15:41:29.829076   44716 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0224 15:41:29.829082   44716 kubeadm.go:322] 
	I0224 15:41:29.829255   44716 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 15:41:29.829361   44716 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0224 15:41:29.829442   44716 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0224 15:41:29.829500   44716 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0224 15:41:29.829595   44716 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0224 15:41:29.829631   44716 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0224 15:41:29.832036   44716 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0224 15:41:29.832099   44716 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0224 15:41:29.832202   44716 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0224 15:41:29.832315   44716 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:41:29.832384   44716 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 15:41:29.832451   44716 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0224 15:41:29.832647   44716 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-583000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-583000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-583000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-583000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
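Before the retry below, the diagnostics kubeadm suggests can be run directly against the node; a hedged sketch, assuming the docker driver (the node container is named after the profile) and that journalctl is available in the kicbase image:

  # Kubelet status and recent logs inside the node container
  docker exec old-k8s-version-583000 systemctl status kubelet --no-pager
  docker exec old-k8s-version-583000 journalctl -u kubelet --no-pager -n 100
  # Any Kubernetes containers the inner docker runtime managed to start
  docker exec old-k8s-version-583000 docker ps -a | grep kube | grep -v pause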
	
	I0224 15:41:29.832688   44716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0224 15:41:30.243600   44716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:41:30.253549   44716 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 15:41:30.253604   44716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 15:41:30.261256   44716 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 15:41:30.261281   44716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 15:41:30.311331   44716 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0224 15:41:30.311375   44716 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 15:41:30.480631   44716 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 15:41:30.480740   44716 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 15:41:30.480840   44716 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 15:41:30.634337   44716 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:41:30.634902   44716 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:41:30.641531   44716 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0224 15:41:30.707719   44716 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 15:41:30.729352   44716 out.go:204]   - Generating certificates and keys ...
	I0224 15:41:30.729433   44716 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 15:41:30.729501   44716 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 15:41:30.729586   44716 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0224 15:41:30.729655   44716 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0224 15:41:30.729727   44716 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0224 15:41:30.729770   44716 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0224 15:41:30.729815   44716 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0224 15:41:30.729861   44716 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0224 15:41:30.729926   44716 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0224 15:41:30.730019   44716 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0224 15:41:30.730062   44716 kubeadm.go:322] [certs] Using the existing "sa" key
	I0224 15:41:30.730140   44716 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 15:41:30.766967   44716 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 15:41:31.076115   44716 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 15:41:31.384960   44716 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 15:41:31.516906   44716 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 15:41:31.517912   44716 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 15:41:31.539561   44716 out.go:204]   - Booting up control plane ...
	I0224 15:41:31.539729   44716 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 15:41:31.539879   44716 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 15:41:31.540011   44716 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 15:41:31.540121   44716 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 15:41:31.540380   44716 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 15:42:11.527410   44716 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0224 15:42:11.528560   44716 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:42:11.528805   44716 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:42:16.530407   44716 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:42:16.530627   44716 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:42:26.532359   44716 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:42:26.532553   44716 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:42:46.534480   44716 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:42:46.534691   44716 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:43:26.536979   44716 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:43:26.537207   44716 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:43:26.537220   44716 kubeadm.go:322] 
	I0224 15:43:26.537306   44716 kubeadm.go:322] Unfortunately, an error has occurred:
	I0224 15:43:26.537361   44716 kubeadm.go:322] 	timed out waiting for the condition
	I0224 15:43:26.537372   44716 kubeadm.go:322] 
	I0224 15:43:26.537413   44716 kubeadm.go:322] This error is likely caused by:
	I0224 15:43:26.537447   44716 kubeadm.go:322] 	- The kubelet is not running
	I0224 15:43:26.537565   44716 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 15:43:26.537576   44716 kubeadm.go:322] 
	I0224 15:43:26.537731   44716 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 15:43:26.537788   44716 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0224 15:43:26.537841   44716 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0224 15:43:26.537851   44716 kubeadm.go:322] 
	I0224 15:43:26.537957   44716 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 15:43:26.538053   44716 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0224 15:43:26.538144   44716 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0224 15:43:26.538209   44716 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0224 15:43:26.538312   44716 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0224 15:43:26.538352   44716 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0224 15:43:26.541202   44716 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0224 15:43:26.541287   44716 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0224 15:43:26.541378   44716 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0224 15:43:26.541451   44716 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:43:26.541527   44716 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 15:43:26.541592   44716 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0224 15:43:26.541638   44716 kubeadm.go:403] StartCluster complete in 3m53.602558278s
	I0224 15:43:26.541732   44716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:43:26.560565   44716 logs.go:277] 0 containers: []
	W0224 15:43:26.560579   44716 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:43:26.560650   44716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:43:26.580329   44716 logs.go:277] 0 containers: []
	W0224 15:43:26.580344   44716 logs.go:279] No container was found matching "etcd"
	I0224 15:43:26.580415   44716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:43:26.599128   44716 logs.go:277] 0 containers: []
	W0224 15:43:26.599142   44716 logs.go:279] No container was found matching "coredns"
	I0224 15:43:26.599208   44716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:43:26.618127   44716 logs.go:277] 0 containers: []
	W0224 15:43:26.618141   44716 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:43:26.618222   44716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:43:26.636831   44716 logs.go:277] 0 containers: []
	W0224 15:43:26.636844   44716 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:43:26.636916   44716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:43:26.656679   44716 logs.go:277] 0 containers: []
	W0224 15:43:26.656692   44716 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:43:26.656760   44716 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:43:26.675681   44716 logs.go:277] 0 containers: []
	W0224 15:43:26.675693   44716 logs.go:279] No container was found matching "kindnet"
	I0224 15:43:26.675700   44716 logs.go:123] Gathering logs for kubelet ...
	I0224 15:43:26.675710   44716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:43:26.714428   44716 logs.go:123] Gathering logs for dmesg ...
	I0224 15:43:26.714446   44716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:43:26.728213   44716 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:43:26.728231   44716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:43:26.782923   44716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
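The refused connection to localhost:8443 above is the expected follow-on symptom: with the kubelet never reporting healthy, the static-pod kube-apiserver is not running, so kubectl on the node has nothing to talk to. A quick check under the same assumptions as the sketches above, also assuming ss is present in the node image:

  # Confirm nothing is listening on the apiserver port inside the node
  docker exec old-k8s-version-583000 sh -c 'ss -ltn | grep 8443 || echo "no listener on 8443"'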
	I0224 15:43:26.782936   44716 logs.go:123] Gathering logs for Docker ...
	I0224 15:43:26.782943   44716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:43:26.807488   44716 logs.go:123] Gathering logs for container status ...
	I0224 15:43:26.807503   44716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:43:28.854675   44716 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047141148s)
	W0224 15:43:28.854822   44716 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0224 15:43:28.854846   44716 out.go:239] * 
	W0224 15:43:28.855038   44716 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 15:43:28.855075   44716 out.go:239] * 
	W0224 15:43:28.855750   44716 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 15:43:28.917693   44716 out.go:177] 
	W0224 15:43:28.959846   44716 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 15:43:28.960024   44716 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0224 15:43:28.960087   44716 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0224 15:43:29.001471   44716 out.go:177] 

                                                
                                                
** /stderr **
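The stderr above flags a cgroup-driver mismatch (Docker reports "cgroupfs", the recommended driver is "systemd"), swap being enabled, and Docker 23.0.1 not being a validated version, and minikube's own suggestion above is to retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal retry/inspection sketch along those lines, reusing the profile name and Kubernetes version from this run and whichever minikube binary is on PATH (the run uses out/minikube-darwin-amd64); it is not verified to clear the kubelet health-check failure on this host:
	# Retry the profile with the kubelet cgroup driver forced to systemd, as the log suggests.
	minikube delete -p old-k8s-version-583000
	minikube start -p old-k8s-version-583000 --driver=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If the kubelet still refuses connections on :10248, inspect it inside the node and capture full logs.
	minikube ssh -p old-k8s-version-583000 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-583000 -- sudo journalctl -xeu kubelet
	minikube logs -p old-k8s-version-583000 --file=logs.txt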
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-583000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-583000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-583000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa",
	        "Created": "2023-02-24T23:39:27.334833203Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 656762,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T23:39:27.64687431Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hostname",
	        "HostsPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hosts",
	        "LogPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa-json.log",
	        "Name": "/old-k8s-version-583000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-583000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-583000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-583000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-583000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-583000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccbbf2decf41649f41df74885d1d793fc99a23332ed7cbdace844e27ae8d2f97",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61530"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61526"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61527"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ccbbf2decf41",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-583000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9cf258058c71",
	                        "old-k8s-version-583000"
	                    ],
	                    "NetworkID": "2385c01da446b4577899b64cfb7c6c9559167bd3e5ad2b3a0a47d05890f119a8",
	                    "EndpointID": "3462dd1dea9f8bb290104351b25276e0a1c016889fe2607476bc6b38a54327d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
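For post-mortems like this one the full docker inspect dump is long; docker inspect also accepts a Go-template --format to pull out just the fields that usually matter here (container state, port bindings, network address). A sketch against the same container, with the field selection chosen for illustration:
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}} exit={{.State.ExitCode}}' old-k8s-version-583000
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-583000
	docker inspect -f '{{range $n, $c := .NetworkSettings.Networks}}{{$n}}: {{$c.IPAddress}}{{end}}' old-k8s-version-583000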
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 6 (398.301515ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 15:43:29.526945   45881 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-583000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-583000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
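The status output warns that kubectl is pointing at a stale context and the stderr shows the profile missing from the kubeconfig; the fix the warning itself names is minikube update-context. A sketch with the same profile (note that while the cluster never finished starting, update-context may still fail for the same underlying reason):
	minikube update-context -p old-k8s-version-583000
	kubectl config use-context old-k8s-version-583000
	kubectl config current-context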
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (250.81s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-583000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-583000 create -f testdata/busybox.yaml: exit status 1 (34.670905ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-583000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-583000 create -f testdata/busybox.yaml failed: exit status 1
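The create fails here because no "old-k8s-version-583000" context was ever written to the kubeconfig (the first start above never completed). A quick way to confirm which contexts exist before pointing kubectl at one, using the kubeconfig path shown in this run's output:
	kubectl config get-contexts -o name
	kubectl --kubeconfig /Users/jenkins/minikube-integration/15909-26406/kubeconfig config get-contexts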
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-583000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-583000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa",
	        "Created": "2023-02-24T23:39:27.334833203Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 656762,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T23:39:27.64687431Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hostname",
	        "HostsPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hosts",
	        "LogPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa-json.log",
	        "Name": "/old-k8s-version-583000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-583000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-583000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-583000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-583000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-583000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccbbf2decf41649f41df74885d1d793fc99a23332ed7cbdace844e27ae8d2f97",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61530"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61526"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61527"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ccbbf2decf41",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-583000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9cf258058c71",
	                        "old-k8s-version-583000"
	                    ],
	                    "NetworkID": "2385c01da446b4577899b64cfb7c6c9559167bd3e5ad2b3a0a47d05890f119a8",
	                    "EndpointID": "3462dd1dea9f8bb290104351b25276e0a1c016889fe2607476bc6b38a54327d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
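The inspect dump above is the full container record; when only a few fields matter, the same data can be read back with a Go-template --format string instead of the whole document. A minimal sketch (the container name old-k8s-version-583000 and the field paths are taken from the dump above):

	docker inspect -f '{{.State.Status}}' old-k8s-version-583000
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-583000

Both commands return the container state and the published port map already visible in the dump, just scoped to those fields.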
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 6 (458.359818ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 15:43:30.079864   45894 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-583000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-583000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
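The warning in the status output and the kubeconfig-endpoint error point at the same root cause: the old-k8s-version-583000 entry is missing from the kubeconfig the test uses, so kubectl is still aimed at a stale context. A minimal repair sketch, following the hint minikube itself prints (profile name taken from the log; this only restores the entry if the cluster is actually reachable):

	out/minikube-darwin-amd64 -p old-k8s-version-583000 update-context
	kubectl config get-contexts

The second command simply lists the contexts so the restored (or still-missing) entry can be confirmed.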
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-583000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-583000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa",
	        "Created": "2023-02-24T23:39:27.334833203Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 656762,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T23:39:27.64687431Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hostname",
	        "HostsPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hosts",
	        "LogPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa-json.log",
	        "Name": "/old-k8s-version-583000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-583000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-583000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-583000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-583000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-583000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccbbf2decf41649f41df74885d1d793fc99a23332ed7cbdace844e27ae8d2f97",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61530"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61526"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61527"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ccbbf2decf41",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-583000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9cf258058c71",
	                        "old-k8s-version-583000"
	                    ],
	                    "NetworkID": "2385c01da446b4577899b64cfb7c6c9559167bd3e5ad2b3a0a47d05890f119a8",
	                    "EndpointID": "3462dd1dea9f8bb290104351b25276e0a1c016889fe2607476bc6b38a54327d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000
E0224 15:43:30.350771   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:43:30.355877   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:43:30.366728   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:43:30.387321   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:43:30.427437   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:43:30.508804   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 6 (400.053117ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 15:43:30.537097   45906 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-583000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-583000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-583000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0224 15:43:30.670966   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:43:30.991346   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:43:31.633633   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:43:32.914404   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:43:35.476700   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:43:40.599008   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:43:50.839306   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:43:51.408079   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:44:08.235621   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:44:11.321015   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:44:19.129400   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:44:20.884244   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:20.890669   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:20.902859   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:20.923348   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:20.963960   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:21.044204   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:21.204351   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:21.524717   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:22.164940   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:23.446114   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:24.736224   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:44:26.006342   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:31.127026   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:41.368752   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:44:52.281538   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:44:52.480893   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:44:55.136114   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
E0224 15:45:01.849144   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-583000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m33.883950335s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
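The stderr above shows every kubectl apply for the metrics-server manifests failing with connection refused on 127.0.0.1:8443, which indicates the apiserver inside the node never came up rather than the addon manifests being broken. In that situation the log bundle the error box asks for can be collected for the same profile; a minimal sketch (profile name from the log, output filename arbitrary):

	out/minikube-darwin-amd64 -p old-k8s-version-583000 logs --file=logs.txt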
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-583000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-583000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-583000 describe deploy/metrics-server -n kube-system: exit status 1 (36.380716ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-583000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-583000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
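The follow-up checks then fail for the same reason: with no old-k8s-version-583000 context in the kubeconfig, the describe command cannot run, and the image assertion has nothing to compare against. If the context did exist, the image the addon actually deployed could be read directly; a minimal sketch (context and namespace from the log; the jsonpath assumes the standard apps/v1 Deployment layout and that the metrics-server deployment was created):

	kubectl --context old-k8s-version-583000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'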
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-583000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-583000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa",
	        "Created": "2023-02-24T23:39:27.334833203Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 656762,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T23:39:27.64687431Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hostname",
	        "HostsPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hosts",
	        "LogPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa-json.log",
	        "Name": "/old-k8s-version-583000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-583000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-583000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-583000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-583000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-583000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ccbbf2decf41649f41df74885d1d793fc99a23332ed7cbdace844e27ae8d2f97",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61530"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61526"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61527"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ccbbf2decf41",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-583000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9cf258058c71",
	                        "old-k8s-version-583000"
	                    ],
	                    "NetworkID": "2385c01da446b4577899b64cfb7c6c9559167bd3e5ad2b3a0a47d05890f119a8",
	                    "EndpointID": "3462dd1dea9f8bb290104351b25276e0a1c016889fe2607476bc6b38a54327d3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 6 (398.195081ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 15:45:04.918912   46011 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-583000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-583000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (497.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-583000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0224 15:45:10.425513   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 15:45:22.822997   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
E0224 15:45:30.156858   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:45:37.237805   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 15:45:42.811687   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:45:54.184853   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 15:46:01.141672   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:46:07.515074   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:46:14.202613   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:46:35.249709   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:46:35.284055   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:47:02.971402   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:47:04.734559   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:47:41.413894   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:47:46.309479   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:48:13.998714   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:48:19.308272   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
E0224 15:48:30.354643   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:48:58.045025   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:49:20.888156   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:49:24.737995   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:49:48.578323   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:49:55.139131   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
E0224 15:50:10.426577   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 15:50:54.188964   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 15:51:01.145412   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:51:07.517682   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-583000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m12.5665895s)

                                                
                                                
-- stdout --
	* [old-k8s-version-583000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-583000 in cluster old-k8s-version-583000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-583000" ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 15:45:06.948428   46043 out.go:296] Setting OutFile to fd 1 ...
	I0224 15:45:06.948613   46043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:45:06.948618   46043 out.go:309] Setting ErrFile to fd 2...
	I0224 15:45:06.948622   46043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:45:06.948731   46043 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 15:45:06.950082   46043 out.go:303] Setting JSON to false
	I0224 15:45:06.968351   46043 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9880,"bootTime":1677272426,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 15:45:06.968430   46043 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 15:45:06.990550   46043 out.go:177] * [old-k8s-version-583000] minikube v1.29.0 on Darwin 13.2.1
	I0224 15:45:07.012529   46043 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 15:45:07.012537   46043 notify.go:220] Checking for updates...
	I0224 15:45:07.056273   46043 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:45:07.077347   46043 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 15:45:07.098391   46043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 15:45:07.120499   46043 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	I0224 15:45:07.142298   46043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 15:45:07.163998   46043 config.go:182] Loaded profile config "old-k8s-version-583000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0224 15:45:07.186242   46043 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0224 15:45:07.207307   46043 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 15:45:07.270159   46043 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 15:45:07.270261   46043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:45:07.413043   46043 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-24 23:45:07.319846868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:45:07.435082   46043 out.go:177] * Using the docker driver based on existing profile
	I0224 15:45:07.456682   46043 start.go:296] selected driver: docker
	I0224 15:45:07.456710   46043 start.go:857] validating driver "docker" against &{Name:old-k8s-version-583000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-583000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:45:07.456837   46043 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 15:45:07.460378   46043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:45:07.610519   46043 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-24 23:45:07.515963803 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:45:07.610669   46043 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 15:45:07.610692   46043 cni.go:84] Creating CNI manager for ""
	I0224 15:45:07.610704   46043 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0224 15:45:07.610713   46043 start_flags.go:319] config:
	{Name:old-k8s-version-583000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-583000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:45:07.654200   46043 out.go:177] * Starting control plane node old-k8s-version-583000 in cluster old-k8s-version-583000
	I0224 15:45:07.696433   46043 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 15:45:07.718316   46043 out.go:177] * Pulling base image ...
	I0224 15:45:07.762320   46043 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 15:45:07.762418   46043 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0224 15:45:07.762417   46043 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 15:45:07.762438   46043 cache.go:57] Caching tarball of preloaded images
	I0224 15:45:07.762667   46043 preload.go:174] Found /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 15:45:07.762685   46043 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0224 15:45:07.763471   46043 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/config.json ...
	I0224 15:45:07.819972   46043 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 15:45:07.819991   46043 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 15:45:07.820015   46043 cache.go:193] Successfully downloaded all kic artifacts
	I0224 15:45:07.820056   46043 start.go:364] acquiring machines lock for old-k8s-version-583000: {Name:mk9aaddc56a14fcb74e6153f904a41eee9f24006 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 15:45:07.820145   46043 start.go:368] acquired machines lock for "old-k8s-version-583000" in 72.266µs
	I0224 15:45:07.820172   46043 start.go:96] Skipping create...Using existing machine configuration
	I0224 15:45:07.820181   46043 fix.go:55] fixHost starting: 
	I0224 15:45:07.820432   46043 cli_runner.go:164] Run: docker container inspect old-k8s-version-583000 --format={{.State.Status}}
	I0224 15:45:07.877861   46043 fix.go:103] recreateIfNeeded on old-k8s-version-583000: state=Stopped err=<nil>
	W0224 15:45:07.877910   46043 fix.go:129] unexpected machine state, will restart: <nil>
	I0224 15:45:07.899782   46043 out.go:177] * Restarting existing docker container for "old-k8s-version-583000" ...
	I0224 15:45:07.943541   46043 cli_runner.go:164] Run: docker start old-k8s-version-583000
	I0224 15:45:08.294545   46043 cli_runner.go:164] Run: docker container inspect old-k8s-version-583000 --format={{.State.Status}}
	I0224 15:45:08.356220   46043 kic.go:426] container "old-k8s-version-583000" state is running.
	I0224 15:45:08.356930   46043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-583000
	I0224 15:45:08.419694   46043 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/config.json ...
	I0224 15:45:08.420089   46043 machine.go:88] provisioning docker machine ...
	I0224 15:45:08.420115   46043 ubuntu.go:169] provisioning hostname "old-k8s-version-583000"
	I0224 15:45:08.420189   46043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:45:08.487324   46043 main.go:141] libmachine: Using SSH client type: native
	I0224 15:45:08.487783   46043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61760 <nil> <nil>}
	I0224 15:45:08.487801   46043 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-583000 && echo "old-k8s-version-583000" | sudo tee /etc/hostname
	I0224 15:45:08.642131   46043 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-583000
	
	I0224 15:45:08.642222   46043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:45:08.703443   46043 main.go:141] libmachine: Using SSH client type: native
	I0224 15:45:08.703796   46043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61760 <nil> <nil>}
	I0224 15:45:08.703808   46043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-583000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-583000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-583000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 15:45:08.838595   46043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 15:45:08.838613   46043 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
	I0224 15:45:08.838629   46043 ubuntu.go:177] setting up certificates
	I0224 15:45:08.838637   46043 provision.go:83] configureAuth start
	I0224 15:45:08.838707   46043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-583000
	I0224 15:45:08.896030   46043 provision.go:138] copyHostCerts
	I0224 15:45:08.896124   46043 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
	I0224 15:45:08.896140   46043 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:45:08.896238   46043 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
	I0224 15:45:08.896441   46043 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
	I0224 15:45:08.896447   46043 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:45:08.896518   46043 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
	I0224 15:45:08.896669   46043 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
	I0224 15:45:08.896674   46043 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:45:08.896733   46043 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
	I0224 15:45:08.896859   46043 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-583000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-583000]
	I0224 15:45:08.986618   46043 provision.go:172] copyRemoteCerts
	I0224 15:45:08.986676   46043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 15:45:08.986729   46043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:45:09.044543   46043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61760 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/old-k8s-version-583000/id_rsa Username:docker}
	I0224 15:45:09.139708   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0224 15:45:09.158514   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 15:45:09.175971   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 15:45:09.193768   46043 provision.go:86] duration metric: configureAuth took 355.109198ms
	I0224 15:45:09.193785   46043 ubuntu.go:193] setting minikube options for container-runtime
	I0224 15:45:09.193931   46043 config.go:182] Loaded profile config "old-k8s-version-583000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0224 15:45:09.194002   46043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:45:09.252649   46043 main.go:141] libmachine: Using SSH client type: native
	I0224 15:45:09.253003   46043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61760 <nil> <nil>}
	I0224 15:45:09.253013   46043 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 15:45:09.389090   46043 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 15:45:09.389106   46043 ubuntu.go:71] root file system type: overlay
	I0224 15:45:09.389212   46043 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 15:45:09.389302   46043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:45:09.446799   46043 main.go:141] libmachine: Using SSH client type: native
	I0224 15:45:09.447157   46043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61760 <nil> <nil>}
	I0224 15:45:09.447207   46043 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 15:45:09.589936   46043 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 15:45:09.590035   46043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:45:09.648759   46043 main.go:141] libmachine: Using SSH client type: native
	I0224 15:45:09.649115   46043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61760 <nil> <nil>}
	I0224 15:45:09.649128   46043 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 15:45:09.788137   46043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 15:45:09.788154   46043 machine.go:91] provisioned docker machine in 1.368045109s
	I0224 15:45:09.788164   46043 start.go:300] post-start starting for "old-k8s-version-583000" (driver="docker")
	I0224 15:45:09.788169   46043 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 15:45:09.788251   46043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 15:45:09.788303   46043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:45:09.846016   46043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61760 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/old-k8s-version-583000/id_rsa Username:docker}
	I0224 15:45:09.941189   46043 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 15:45:09.944855   46043 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 15:45:09.944877   46043 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 15:45:09.944884   46043 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 15:45:09.944888   46043 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 15:45:09.944895   46043 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
	I0224 15:45:09.944984   46043 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
	I0224 15:45:09.945154   46043 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
	I0224 15:45:09.945323   46043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 15:45:09.952811   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:45:09.970046   46043 start.go:303] post-start completed in 181.871581ms
	I0224 15:45:09.970138   46043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 15:45:09.970200   46043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:45:10.084487   46043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61760 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/old-k8s-version-583000/id_rsa Username:docker}
	I0224 15:45:10.176965   46043 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 15:45:10.181500   46043 fix.go:57] fixHost completed within 2.361293187s
	I0224 15:45:10.181518   46043 start.go:83] releasing machines lock for "old-k8s-version-583000", held for 2.36134551s
	I0224 15:45:10.181611   46043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-583000
	I0224 15:45:10.239117   46043 ssh_runner.go:195] Run: cat /version.json
	I0224 15:45:10.239156   46043 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0224 15:45:10.239189   46043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:45:10.239223   46043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:45:10.303892   46043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61760 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/old-k8s-version-583000/id_rsa Username:docker}
	I0224 15:45:10.304640   46043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61760 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/old-k8s-version-583000/id_rsa Username:docker}
	I0224 15:45:10.697322   46043 ssh_runner.go:195] Run: systemctl --version
	I0224 15:45:10.702607   46043 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0224 15:45:10.707258   46043 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0224 15:45:10.707334   46043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0224 15:45:10.714868   46043 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0224 15:45:10.722440   46043 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0224 15:45:10.722454   46043 start.go:485] detecting cgroup driver to use...
	I0224 15:45:10.722464   46043 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:45:10.722557   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:45:10.735807   46043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0224 15:45:10.744286   46043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 15:45:10.752721   46043 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 15:45:10.752777   46043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 15:45:10.761406   46043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:45:10.770373   46043 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 15:45:10.779053   46043 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:45:10.787868   46043 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 15:45:10.796154   46043 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 15:45:10.804627   46043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 15:45:10.811954   46043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 15:45:10.819136   46043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:45:10.884878   46043 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 15:45:10.954587   46043 start.go:485] detecting cgroup driver to use...
	I0224 15:45:10.954606   46043 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:45:10.954674   46043 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 15:45:10.965261   46043 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 15:45:10.965329   46043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 15:45:10.976094   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:45:10.990464   46043 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 15:45:11.084005   46043 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 15:45:11.187876   46043 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 15:45:11.187895   46043 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 15:45:11.202823   46043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:45:11.289830   46043 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 15:45:11.508919   46043 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:45:11.536625   46043 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:45:11.586214   46043 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0224 15:45:11.586381   46043 cli_runner.go:164] Run: docker exec -t old-k8s-version-583000 dig +short host.docker.internal
	I0224 15:45:11.698589   46043 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 15:45:11.698688   46043 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 15:45:11.703222   46043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:45:11.713435   46043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:45:11.771063   46043 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 15:45:11.771138   46043 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:45:11.791478   46043 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0224 15:45:11.791494   46043 docker.go:560] Images already preloaded, skipping extraction
	I0224 15:45:11.791576   46043 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:45:11.812763   46043 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0224 15:45:11.812782   46043 cache_images.go:84] Images are preloaded, skipping loading
	I0224 15:45:11.812868   46043 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 15:45:11.840749   46043 cni.go:84] Creating CNI manager for ""
	I0224 15:45:11.840767   46043 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0224 15:45:11.840788   46043 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 15:45:11.840812   46043 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-583000 NodeName:old-k8s-version-583000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 15:45:11.840937   46043 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-583000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-583000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 15:45:11.841063   46043 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-583000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-583000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 15:45:11.841129   46043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0224 15:45:11.849915   46043 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 15:45:11.849999   46043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 15:45:11.857603   46043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0224 15:45:11.870349   46043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 15:45:11.883748   46043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0224 15:45:11.897022   46043 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0224 15:45:11.901089   46043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:45:11.911282   46043 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000 for IP: 192.168.76.2
	I0224 15:45:11.911300   46043 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:45:11.911459   46043 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 15:45:11.911510   46043 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 15:45:11.911613   46043 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/client.key
	I0224 15:45:11.911685   46043 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.key.31bdca25
	I0224 15:45:11.911762   46043 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/proxy-client.key
	I0224 15:45:11.911968   46043 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 15:45:11.912010   46043 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 15:45:11.912021   46043 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 15:45:11.912058   46043 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 15:45:11.912093   46043 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 15:45:11.912127   46043 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 15:45:11.912194   46043 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:45:11.912772   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 15:45:11.930699   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 15:45:11.948307   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 15:45:11.966246   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/old-k8s-version-583000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 15:45:11.987691   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 15:45:12.005464   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 15:45:12.023375   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 15:45:12.040902   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 15:45:12.058215   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 15:45:12.075812   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 15:45:12.093329   46043 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 15:45:12.110949   46043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 15:45:12.124282   46043 ssh_runner.go:195] Run: openssl version
	I0224 15:45:12.129889   46043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 15:45:12.138105   46043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 15:45:12.142194   46043 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:45:12.142246   46043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 15:45:12.147979   46043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
	I0224 15:45:12.155676   46043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 15:45:12.164695   46043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 15:45:12.168943   46043 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:45:12.168990   46043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 15:45:12.174330   46043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 15:45:12.182281   46043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 15:45:12.190575   46043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:45:12.194647   46043 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:45:12.194694   46043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:45:12.200391   46043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 15:45:12.208381   46043 kubeadm.go:401] StartCluster: {Name:old-k8s-version-583000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-583000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:45:12.208490   46043 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 15:45:12.228393   46043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 15:45:12.236296   46043 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0224 15:45:12.236313   46043 kubeadm.go:633] restartCluster start
	I0224 15:45:12.236370   46043 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0224 15:45:12.243654   46043 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:12.243720   46043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-583000
	I0224 15:45:12.303838   46043 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-583000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:45:12.304030   46043 kubeconfig.go:146] "old-k8s-version-583000" context is missing from /Users/jenkins/minikube-integration/15909-26406/kubeconfig - will repair!
	I0224 15:45:12.304361   46043 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/kubeconfig: {Name:mk1182fefc6ba3b7ea2d2356c47127703005a4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:45:12.305645   46043 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0224 15:45:12.313919   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:12.313973   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:12.322835   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:12.824536   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:12.824703   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:12.835714   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:13.323295   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:13.323403   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:13.334595   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:13.823402   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:13.823492   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:13.832961   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:14.323108   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:14.323182   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:14.332874   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:14.823467   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:14.823578   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:14.834664   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:15.323071   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:15.323205   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:15.333816   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:15.823254   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:15.823372   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:15.834409   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:16.323894   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:16.324026   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:16.334486   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:16.823950   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:16.824112   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:16.835425   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:17.324614   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:17.324816   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:17.335670   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:17.824429   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:17.824625   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:17.835730   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:18.323438   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:18.323545   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:18.333207   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:18.823230   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:18.823413   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:18.833103   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:19.324370   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:19.324486   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:19.335732   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:19.824132   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:19.824262   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:19.835679   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:20.323429   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:20.323510   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:20.332917   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:20.824995   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:20.825102   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:20.836238   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:21.325016   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:21.325209   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:21.336467   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:21.825065   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:21.825202   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:21.836238   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:22.323125   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:22.323257   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:22.334103   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:22.334114   46043 api_server.go:165] Checking apiserver status ...
	I0224 15:45:22.334170   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:45:22.343182   46043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:45:22.343195   46043 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0224 15:45:22.343204   46043 kubeadm.go:1120] stopping kube-system containers ...
	I0224 15:45:22.343293   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 15:45:22.363076   46043 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0224 15:45:22.373860   46043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 15:45:22.381964   46043 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Feb 24 23:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Feb 24 23:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Feb 24 23:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Feb 24 23:41 /etc/kubernetes/scheduler.conf
	
	I0224 15:45:22.382031   46043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 15:45:22.389794   46043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 15:45:22.397291   46043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 15:45:22.404868   46043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 15:45:22.412571   46043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 15:45:22.420608   46043 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0224 15:45:22.420622   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:45:22.473080   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:45:23.162383   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:45:23.323690   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:45:23.382692   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:45:23.456304   46043 api_server.go:51] waiting for apiserver process to appear ...
	I0224 15:45:23.456378   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:23.965404   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:24.465438   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:24.965428   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:25.465629   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:25.965508   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:26.465596   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:26.965633   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:27.465903   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:27.965621   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:28.466137   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:28.965892   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:29.465513   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:29.965554   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:30.465572   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:30.965932   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:31.465699   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:31.966726   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:32.465875   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:32.965653   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:33.466805   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:33.965558   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:34.466331   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:34.965726   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:35.465542   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:35.965653   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:36.466294   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:36.965554   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:37.465640   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:37.966706   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:38.465733   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:38.965600   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:39.465568   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:39.965654   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:40.465679   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:40.966439   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:41.465593   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:41.965706   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:42.466192   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:42.965615   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:43.465965   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:43.967030   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:44.465614   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:44.966111   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:45.465871   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:45.966095   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:46.466096   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:46.966752   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:47.466697   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:47.965855   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:48.465797   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:48.965680   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:49.465951   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:49.965850   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:50.465916   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:50.965726   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:51.465765   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:51.965885   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:52.465820   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:52.965769   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:53.466159   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:53.965770   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:54.466266   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:54.965860   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:55.465843   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:55.966037   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:56.465734   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:56.966304   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:57.466053   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:57.966486   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:58.465826   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:58.965797   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:59.465870   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:45:59.965921   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:00.466895   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:00.965835   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:01.466005   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:01.965905   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:02.465899   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:02.965820   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:03.465827   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:03.965785   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:04.465928   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:04.966238   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:05.466100   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:05.965912   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:06.466055   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:06.965768   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:07.466326   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:07.966121   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:08.466305   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:08.965888   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:09.465923   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:09.965790   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:10.467450   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:10.965870   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:11.465835   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:11.965861   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:12.466226   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:12.965907   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:13.465947   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:13.966872   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:14.466880   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:14.967871   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:15.466854   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:15.966881   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:16.466988   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:16.966503   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:17.466902   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:17.966886   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:18.466936   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:18.966952   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:19.467476   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:19.966911   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:20.467106   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:20.966992   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:21.467047   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:21.966976   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:22.466968   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:22.966927   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:23.467003   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:46:23.487505   46043 logs.go:277] 0 containers: []
	W0224 15:46:23.487521   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:46:23.487609   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:46:23.507861   46043 logs.go:277] 0 containers: []
	W0224 15:46:23.507875   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:46:23.507946   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:46:23.528519   46043 logs.go:277] 0 containers: []
	W0224 15:46:23.528537   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:46:23.528628   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:46:23.549895   46043 logs.go:277] 0 containers: []
	W0224 15:46:23.549909   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:46:23.549988   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:46:23.569655   46043 logs.go:277] 0 containers: []
	W0224 15:46:23.569669   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:46:23.569744   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:46:23.592243   46043 logs.go:277] 0 containers: []
	W0224 15:46:23.592258   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:46:23.592365   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:46:23.612451   46043 logs.go:277] 0 containers: []
	W0224 15:46:23.612463   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:46:23.612529   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:46:23.631782   46043 logs.go:277] 0 containers: []
	W0224 15:46:23.631796   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:46:23.631803   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:46:23.631813   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:46:23.685328   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:46:23.685342   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:46:23.685349   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:46:23.707763   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:46:23.707780   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:46:25.750826   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043014989s)
	I0224 15:46:25.750947   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:46:25.750956   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:46:25.790357   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:46:25.790381   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:46:28.303911   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:28.468149   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:46:28.489574   46043 logs.go:277] 0 containers: []
	W0224 15:46:28.489590   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:46:28.489675   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:46:28.507722   46043 logs.go:277] 0 containers: []
	W0224 15:46:28.507738   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:46:28.507818   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:46:28.527489   46043 logs.go:277] 0 containers: []
	W0224 15:46:28.527503   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:46:28.527578   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:46:28.546721   46043 logs.go:277] 0 containers: []
	W0224 15:46:28.546734   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:46:28.546808   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:46:28.566856   46043 logs.go:277] 0 containers: []
	W0224 15:46:28.566870   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:46:28.566941   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:46:28.585966   46043 logs.go:277] 0 containers: []
	W0224 15:46:28.585979   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:46:28.586046   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:46:28.606405   46043 logs.go:277] 0 containers: []
	W0224 15:46:28.606418   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:46:28.606486   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:46:28.626169   46043 logs.go:277] 0 containers: []
	W0224 15:46:28.626181   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:46:28.626188   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:46:28.626195   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:46:28.667475   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:46:28.667491   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:46:28.680242   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:46:28.680255   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:46:28.736147   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:46:28.736164   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:46:28.736173   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:46:28.760882   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:46:28.760902   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:46:30.810949   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050014362s)
	I0224 15:46:33.311476   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:33.467412   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:46:33.489349   46043 logs.go:277] 0 containers: []
	W0224 15:46:33.489363   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:46:33.489431   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:46:33.512035   46043 logs.go:277] 0 containers: []
	W0224 15:46:33.512054   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:46:33.512170   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:46:33.532222   46043 logs.go:277] 0 containers: []
	W0224 15:46:33.532237   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:46:33.532310   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:46:33.551335   46043 logs.go:277] 0 containers: []
	W0224 15:46:33.551348   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:46:33.551418   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:46:33.571349   46043 logs.go:277] 0 containers: []
	W0224 15:46:33.571362   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:46:33.571430   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:46:33.591563   46043 logs.go:277] 0 containers: []
	W0224 15:46:33.591576   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:46:33.591651   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:46:33.610853   46043 logs.go:277] 0 containers: []
	W0224 15:46:33.610867   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:46:33.610936   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:46:33.629901   46043 logs.go:277] 0 containers: []
	W0224 15:46:33.629915   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:46:33.629923   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:46:33.629930   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:46:33.667571   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:46:33.667585   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:46:33.679608   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:46:33.679621   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:46:33.732942   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:46:33.732953   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:46:33.732960   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:46:33.754936   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:46:33.754949   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:46:35.807247   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05226849s)
	I0224 15:46:38.309086   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:38.467140   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:46:38.487499   46043 logs.go:277] 0 containers: []
	W0224 15:46:38.487511   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:46:38.487580   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:46:38.508830   46043 logs.go:277] 0 containers: []
	W0224 15:46:38.508845   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:46:38.508941   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:46:38.532291   46043 logs.go:277] 0 containers: []
	W0224 15:46:38.532306   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:46:38.532385   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:46:38.557421   46043 logs.go:277] 0 containers: []
	W0224 15:46:38.557435   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:46:38.557517   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:46:38.579296   46043 logs.go:277] 0 containers: []
	W0224 15:46:38.579309   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:46:38.579381   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:46:38.598285   46043 logs.go:277] 0 containers: []
	W0224 15:46:38.598299   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:46:38.598374   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:46:38.618421   46043 logs.go:277] 0 containers: []
	W0224 15:46:38.618435   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:46:38.618502   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:46:38.638680   46043 logs.go:277] 0 containers: []
	W0224 15:46:38.638692   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:46:38.638702   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:46:38.638709   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:46:38.680083   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:46:38.680098   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:46:38.692676   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:46:38.692713   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:46:38.749060   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:46:38.749072   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:46:38.749079   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:46:38.770391   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:46:38.770406   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:46:40.816295   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045859765s)
	I0224 15:46:43.317223   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:43.467446   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:46:43.488015   46043 logs.go:277] 0 containers: []
	W0224 15:46:43.488028   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:46:43.488109   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:46:43.508212   46043 logs.go:277] 0 containers: []
	W0224 15:46:43.508225   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:46:43.508296   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:46:43.528917   46043 logs.go:277] 0 containers: []
	W0224 15:46:43.528934   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:46:43.529002   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:46:43.549417   46043 logs.go:277] 0 containers: []
	W0224 15:46:43.549433   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:46:43.549534   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:46:43.585130   46043 logs.go:277] 0 containers: []
	W0224 15:46:43.585148   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:46:43.585218   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:46:43.606501   46043 logs.go:277] 0 containers: []
	W0224 15:46:43.606515   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:46:43.606586   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:46:43.628801   46043 logs.go:277] 0 containers: []
	W0224 15:46:43.628814   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:46:43.628891   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:46:43.650981   46043 logs.go:277] 0 containers: []
	W0224 15:46:43.650995   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:46:43.651003   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:46:43.651011   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:46:43.713661   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:46:43.713679   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:46:43.713693   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:46:43.742970   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:46:43.742988   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:46:45.792886   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049867449s)
	I0224 15:46:45.792999   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:46:45.793008   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:46:45.832356   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:46:45.832370   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:46:48.345248   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:48.467292   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:46:48.488286   46043 logs.go:277] 0 containers: []
	W0224 15:46:48.488300   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:46:48.488367   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:46:48.508089   46043 logs.go:277] 0 containers: []
	W0224 15:46:48.508102   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:46:48.508168   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:46:48.527757   46043 logs.go:277] 0 containers: []
	W0224 15:46:48.527770   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:46:48.527839   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:46:48.548301   46043 logs.go:277] 0 containers: []
	W0224 15:46:48.548314   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:46:48.548382   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:46:48.567578   46043 logs.go:277] 0 containers: []
	W0224 15:46:48.567592   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:46:48.567662   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:46:48.587602   46043 logs.go:277] 0 containers: []
	W0224 15:46:48.587615   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:46:48.587700   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:46:48.609526   46043 logs.go:277] 0 containers: []
	W0224 15:46:48.609540   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:46:48.609612   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:46:48.630929   46043 logs.go:277] 0 containers: []
	W0224 15:46:48.630949   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:46:48.630960   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:46:48.630973   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:46:50.680006   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048993186s)
	I0224 15:46:50.680215   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:46:50.680226   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:46:50.723688   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:46:50.723706   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:46:50.736691   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:46:50.736705   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:46:50.799480   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:46:50.799492   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:46:50.799500   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:46:53.325521   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:53.467291   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:46:53.487710   46043 logs.go:277] 0 containers: []
	W0224 15:46:53.487723   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:46:53.487792   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:46:53.507535   46043 logs.go:277] 0 containers: []
	W0224 15:46:53.507548   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:46:53.507626   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:46:53.527218   46043 logs.go:277] 0 containers: []
	W0224 15:46:53.527233   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:46:53.527317   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:46:53.551136   46043 logs.go:277] 0 containers: []
	W0224 15:46:53.551149   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:46:53.551214   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:46:53.571797   46043 logs.go:277] 0 containers: []
	W0224 15:46:53.571811   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:46:53.571882   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:46:53.591801   46043 logs.go:277] 0 containers: []
	W0224 15:46:53.591814   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:46:53.591883   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:46:53.611669   46043 logs.go:277] 0 containers: []
	W0224 15:46:53.611683   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:46:53.611766   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:46:53.632941   46043 logs.go:277] 0 containers: []
	W0224 15:46:53.632954   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:46:53.632962   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:46:53.632973   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:46:55.682522   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04951754s)
	I0224 15:46:55.682649   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:46:55.682662   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:46:55.724415   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:46:55.724438   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:46:55.737421   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:46:55.737441   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:46:55.797093   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:46:55.797108   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:46:55.797115   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:46:58.320758   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:46:58.467278   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:46:58.487411   46043 logs.go:277] 0 containers: []
	W0224 15:46:58.487435   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:46:58.487514   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:46:58.506182   46043 logs.go:277] 0 containers: []
	W0224 15:46:58.506195   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:46:58.506266   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:46:58.524948   46043 logs.go:277] 0 containers: []
	W0224 15:46:58.524962   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:46:58.525033   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:46:58.545495   46043 logs.go:277] 0 containers: []
	W0224 15:46:58.545511   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:46:58.545584   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:46:58.565770   46043 logs.go:277] 0 containers: []
	W0224 15:46:58.565783   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:46:58.565859   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:46:58.584903   46043 logs.go:277] 0 containers: []
	W0224 15:46:58.584917   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:46:58.584992   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:46:58.605154   46043 logs.go:277] 0 containers: []
	W0224 15:46:58.605169   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:46:58.605245   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:46:58.624811   46043 logs.go:277] 0 containers: []
	W0224 15:46:58.624829   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:46:58.624836   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:46:58.624851   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:46:58.665047   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:46:58.665062   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:46:58.677210   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:46:58.677224   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:46:58.734429   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:46:58.734441   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:46:58.734449   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:46:58.758694   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:46:58.758713   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:47:00.804793   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046048189s)
	I0224 15:47:03.305510   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:47:03.467373   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:47:03.488419   46043 logs.go:277] 0 containers: []
	W0224 15:47:03.488433   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:47:03.488503   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:47:03.507510   46043 logs.go:277] 0 containers: []
	W0224 15:47:03.507523   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:47:03.507590   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:47:03.526505   46043 logs.go:277] 0 containers: []
	W0224 15:47:03.526519   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:47:03.526589   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:47:03.545330   46043 logs.go:277] 0 containers: []
	W0224 15:47:03.545343   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:47:03.545414   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:47:03.565263   46043 logs.go:277] 0 containers: []
	W0224 15:47:03.565276   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:47:03.565345   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:47:03.585626   46043 logs.go:277] 0 containers: []
	W0224 15:47:03.585640   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:47:03.585709   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:47:03.604761   46043 logs.go:277] 0 containers: []
	W0224 15:47:03.604774   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:47:03.604841   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:47:03.623918   46043 logs.go:277] 0 containers: []
	W0224 15:47:03.623932   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:47:03.623939   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:47:03.623947   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:47:03.635797   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:47:03.635810   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:47:03.694111   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:47:03.694128   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:47:03.694136   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:47:03.715668   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:47:03.715683   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:47:05.762005   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046291706s)
	I0224 15:47:05.762182   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:47:05.762191   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:47:08.301267   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:47:08.467938   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:47:08.490469   46043 logs.go:277] 0 containers: []
	W0224 15:47:08.490483   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:47:08.490553   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:47:08.511821   46043 logs.go:277] 0 containers: []
	W0224 15:47:08.511835   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:47:08.511903   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:47:08.531282   46043 logs.go:277] 0 containers: []
	W0224 15:47:08.531297   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:47:08.531373   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:47:08.551976   46043 logs.go:277] 0 containers: []
	W0224 15:47:08.551990   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:47:08.552071   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:47:08.571614   46043 logs.go:277] 0 containers: []
	W0224 15:47:08.571627   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:47:08.571700   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:47:08.591641   46043 logs.go:277] 0 containers: []
	W0224 15:47:08.591654   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:47:08.591723   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:47:08.611297   46043 logs.go:277] 0 containers: []
	W0224 15:47:08.611310   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:47:08.611384   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:47:08.630256   46043 logs.go:277] 0 containers: []
	W0224 15:47:08.630269   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:47:08.630276   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:47:08.630283   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:47:10.677171   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046856793s)
	I0224 15:47:10.677286   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:47:10.677294   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:47:10.715764   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:47:10.715781   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:47:10.728021   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:47:10.728036   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:47:10.787966   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:47:10.787979   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:47:10.787986   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:47:13.311565   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:47:13.467514   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:47:13.492550   46043 logs.go:277] 0 containers: []
	W0224 15:47:13.492564   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:47:13.492644   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:47:13.512023   46043 logs.go:277] 0 containers: []
	W0224 15:47:13.512038   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:47:13.512111   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:47:13.531706   46043 logs.go:277] 0 containers: []
	W0224 15:47:13.531719   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:47:13.531790   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:47:13.551450   46043 logs.go:277] 0 containers: []
	W0224 15:47:13.551465   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:47:13.551534   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:47:13.570628   46043 logs.go:277] 0 containers: []
	W0224 15:47:13.570643   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:47:13.570710   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:47:13.589168   46043 logs.go:277] 0 containers: []
	W0224 15:47:13.589184   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:47:13.589261   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:47:13.608601   46043 logs.go:277] 0 containers: []
	W0224 15:47:13.608615   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:47:13.608685   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:47:13.628215   46043 logs.go:277] 0 containers: []
	W0224 15:47:13.628228   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:47:13.628235   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:47:13.628243   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:47:13.683967   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:47:13.683978   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:47:13.683985   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:47:13.705483   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:47:13.705498   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:47:15.752162   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046634356s)
	I0224 15:47:15.752272   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:47:15.752280   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:47:15.789398   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:47:15.789412   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:47:18.302546   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:47:18.468257   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:47:18.490587   46043 logs.go:277] 0 containers: []
	W0224 15:47:18.490599   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:47:18.490666   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:47:18.510551   46043 logs.go:277] 0 containers: []
	W0224 15:47:18.510565   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:47:18.510637   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:47:18.529317   46043 logs.go:277] 0 containers: []
	W0224 15:47:18.529332   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:47:18.529404   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:47:18.549037   46043 logs.go:277] 0 containers: []
	W0224 15:47:18.549052   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:47:18.549119   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:47:18.568207   46043 logs.go:277] 0 containers: []
	W0224 15:47:18.568228   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:47:18.568296   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:47:18.588111   46043 logs.go:277] 0 containers: []
	W0224 15:47:18.588125   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:47:18.588194   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:47:18.606810   46043 logs.go:277] 0 containers: []
	W0224 15:47:18.606823   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:47:18.606897   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:47:18.626855   46043 logs.go:277] 0 containers: []
	W0224 15:47:18.626868   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:47:18.626875   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:47:18.626898   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:47:18.665334   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:47:18.665352   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:47:18.677985   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:47:18.677998   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:47:18.732583   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:47:18.732594   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:47:18.732600   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:47:18.753705   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:47:18.753719   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:47:20.797877   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044128818s)
	I0224 15:47:23.298748   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:47:23.466568   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:47:23.487113   46043 logs.go:277] 0 containers: []
	W0224 15:47:23.487126   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:47:23.487196   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:47:23.508028   46043 logs.go:277] 0 containers: []
	W0224 15:47:23.508042   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:47:23.508113   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:47:23.528308   46043 logs.go:277] 0 containers: []
	W0224 15:47:23.528322   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:47:23.528412   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:47:23.558085   46043 logs.go:277] 0 containers: []
	W0224 15:47:23.558099   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:47:23.558169   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:47:23.579001   46043 logs.go:277] 0 containers: []
	W0224 15:47:23.579014   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:47:23.579082   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:47:23.599156   46043 logs.go:277] 0 containers: []
	W0224 15:47:23.599171   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:47:23.599242   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:47:23.618722   46043 logs.go:277] 0 containers: []
	W0224 15:47:23.618735   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:47:23.618808   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:47:23.638997   46043 logs.go:277] 0 containers: []
	W0224 15:47:23.639011   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:47:23.639018   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:47:23.639025   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:47:23.678304   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:47:23.678319   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:47:23.690708   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:47:23.690722   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:47:23.744031   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:47:23.744042   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:47:23.744049   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:47:23.765627   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:47:23.765641   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:47:25.808917   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043244511s)
	I0224 15:47:28.309444   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:47:28.468700   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:47:28.490608   46043 logs.go:277] 0 containers: []
	W0224 15:47:28.490622   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:47:28.490696   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:47:28.509755   46043 logs.go:277] 0 containers: []
	W0224 15:47:28.509768   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:47:28.509834   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:47:28.529299   46043 logs.go:277] 0 containers: []
	W0224 15:47:28.529319   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:47:28.529392   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:47:28.549073   46043 logs.go:277] 0 containers: []
	W0224 15:47:28.549088   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:47:28.549158   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:47:28.568269   46043 logs.go:277] 0 containers: []
	W0224 15:47:28.568283   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:47:28.568352   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:47:28.587997   46043 logs.go:277] 0 containers: []
	W0224 15:47:28.588010   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:47:28.588080   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:47:28.608700   46043 logs.go:277] 0 containers: []
	W0224 15:47:28.608712   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:47:28.608779   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:47:28.627895   46043 logs.go:277] 0 containers: []
	W0224 15:47:28.627909   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:47:28.627917   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:47:28.627925   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:47:28.683376   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:47:28.683392   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:47:28.683399   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:47:28.704477   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:47:28.704492   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:47:30.749429   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044907416s)
	I0224 15:47:30.749533   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:47:30.749541   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:47:30.787720   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:47:30.787735   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:47:33.300636   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:47:33.468850   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:47:33.491090   46043 logs.go:277] 0 containers: []
	W0224 15:47:33.491104   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:47:33.491171   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:47:33.511103   46043 logs.go:277] 0 containers: []
	W0224 15:47:33.511116   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:47:33.511185   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:47:33.530394   46043 logs.go:277] 0 containers: []
	W0224 15:47:33.530407   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:47:33.530477   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:47:33.549624   46043 logs.go:277] 0 containers: []
	W0224 15:47:33.549636   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:47:33.549708   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:47:33.569583   46043 logs.go:277] 0 containers: []
	W0224 15:47:33.569596   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:47:33.569666   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:47:33.589713   46043 logs.go:277] 0 containers: []
	W0224 15:47:33.589726   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:47:33.589804   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:47:33.608607   46043 logs.go:277] 0 containers: []
	W0224 15:47:33.608621   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:47:33.608688   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:47:33.628476   46043 logs.go:277] 0 containers: []
	W0224 15:47:33.628488   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:47:33.628495   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:47:33.628502   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:47:35.675326   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046792866s)
	I0224 15:47:35.675459   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:47:35.675469   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:47:35.713096   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:47:35.713110   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:47:35.725777   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:47:35.725790   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:47:35.781134   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:47:35.781145   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:47:35.781152   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:47:38.302832   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:47:38.466636   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:47:38.486741   46043 logs.go:277] 0 containers: []
	W0224 15:47:38.486757   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:47:38.486827   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:47:38.508119   46043 logs.go:277] 0 containers: []
	W0224 15:47:38.508134   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:47:38.508206   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:47:38.527371   46043 logs.go:277] 0 containers: []
	W0224 15:47:38.527392   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:47:38.527474   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:47:38.553831   46043 logs.go:277] 0 containers: []
	W0224 15:47:38.553844   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:47:38.553913   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:47:38.573510   46043 logs.go:277] 0 containers: []
	W0224 15:47:38.573522   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:47:38.573590   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:47:38.593599   46043 logs.go:277] 0 containers: []
	W0224 15:47:38.593612   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:47:38.593688   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:47:38.613565   46043 logs.go:277] 0 containers: []
	W0224 15:47:38.613579   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:47:38.613648   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:47:38.632950   46043 logs.go:277] 0 containers: []
	W0224 15:47:38.632964   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:47:38.632971   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:47:38.632979   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:47:38.670609   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:47:38.670624   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:47:38.682873   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:47:38.682886   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:47:38.736937   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:47:38.736947   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:47:38.736954   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:47:38.758462   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:47:38.758478   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:47:40.801877   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043369255s)
	I0224 15:47:43.302615   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:47:43.466808   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:47:43.488192   46043 logs.go:277] 0 containers: []
	W0224 15:47:43.488207   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:47:43.488276   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:47:43.507496   46043 logs.go:277] 0 containers: []
	W0224 15:47:43.507508   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:47:43.507577   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:47:43.526746   46043 logs.go:277] 0 containers: []
	W0224 15:47:43.526761   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:47:43.526830   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:47:43.545988   46043 logs.go:277] 0 containers: []
	W0224 15:47:43.546004   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:47:43.546075   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:47:43.565671   46043 logs.go:277] 0 containers: []
	W0224 15:47:43.565684   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:47:43.565751   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:47:43.584693   46043 logs.go:277] 0 containers: []
	W0224 15:47:43.584707   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:47:43.584779   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:47:43.605028   46043 logs.go:277] 0 containers: []
	W0224 15:47:43.605042   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:47:43.605107   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:47:43.624624   46043 logs.go:277] 0 containers: []
	W0224 15:47:43.624637   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:47:43.624644   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:47:43.624652   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:47:43.680421   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:47:43.680434   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:47:43.680443   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:47:43.701334   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:47:43.701350   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:47:45.750385   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049005682s)
	I0224 15:47:45.750513   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:47:45.750521   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:47:45.787951   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:47:45.787966   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:47:48.301965   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:47:48.467131   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:47:48.489165   46043 logs.go:277] 0 containers: []
	W0224 15:47:48.489180   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:47:48.489249   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:47:48.509072   46043 logs.go:277] 0 containers: []
	W0224 15:47:48.509084   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:47:48.509152   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:47:48.529414   46043 logs.go:277] 0 containers: []
	W0224 15:47:48.529426   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:47:48.529493   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:47:48.548592   46043 logs.go:277] 0 containers: []
	W0224 15:47:48.548605   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:47:48.548671   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:47:48.568339   46043 logs.go:277] 0 containers: []
	W0224 15:47:48.568352   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:47:48.568420   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:47:48.587704   46043 logs.go:277] 0 containers: []
	W0224 15:47:48.587718   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:47:48.587786   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:47:48.607397   46043 logs.go:277] 0 containers: []
	W0224 15:47:48.607409   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:47:48.607477   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:47:48.627140   46043 logs.go:277] 0 containers: []
	W0224 15:47:48.627154   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:47:48.627161   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:47:48.627171   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:47:48.667848   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:47:48.667864   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:47:48.680404   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:47:48.680418   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:47:48.735410   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:47:48.735422   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:47:48.735429   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:47:48.756418   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:47:48.756433   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:47:50.802045   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045581724s)
	I0224 15:47:53.302700   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:47:53.466843   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:47:53.488221   46043 logs.go:277] 0 containers: []
	W0224 15:47:53.488236   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:47:53.488305   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:47:53.509165   46043 logs.go:277] 0 containers: []
	W0224 15:47:53.509177   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:47:53.509245   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:47:53.529404   46043 logs.go:277] 0 containers: []
	W0224 15:47:53.529419   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:47:53.529491   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:47:53.556867   46043 logs.go:277] 0 containers: []
	W0224 15:47:53.556881   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:47:53.556948   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:47:53.575853   46043 logs.go:277] 0 containers: []
	W0224 15:47:53.575866   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:47:53.575932   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:47:53.595076   46043 logs.go:277] 0 containers: []
	W0224 15:47:53.595089   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:47:53.595154   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:47:53.614122   46043 logs.go:277] 0 containers: []
	W0224 15:47:53.614136   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:47:53.614204   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:47:53.633956   46043 logs.go:277] 0 containers: []
	W0224 15:47:53.633970   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:47:53.633977   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:47:53.633985   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:47:53.672057   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:47:53.672070   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:47:53.684447   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:47:53.684461   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:47:53.739161   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:47:53.739172   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:47:53.739179   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:47:53.760298   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:47:53.760313   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:47:55.803694   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043350028s)
	I0224 15:47:58.304950   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:47:58.468975   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:47:58.491096   46043 logs.go:277] 0 containers: []
	W0224 15:47:58.491108   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:47:58.491176   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:47:58.511130   46043 logs.go:277] 0 containers: []
	W0224 15:47:58.511144   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:47:58.511211   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:47:58.530020   46043 logs.go:277] 0 containers: []
	W0224 15:47:58.530034   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:47:58.530108   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:47:58.549520   46043 logs.go:277] 0 containers: []
	W0224 15:47:58.549535   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:47:58.549620   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:47:58.568917   46043 logs.go:277] 0 containers: []
	W0224 15:47:58.568930   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:47:58.568997   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:47:58.587699   46043 logs.go:277] 0 containers: []
	W0224 15:47:58.587712   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:47:58.587782   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:47:58.607382   46043 logs.go:277] 0 containers: []
	W0224 15:47:58.607395   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:47:58.607465   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:47:58.626706   46043 logs.go:277] 0 containers: []
	W0224 15:47:58.626719   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:47:58.626727   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:47:58.626734   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:47:58.665750   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:47:58.665767   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:47:58.678752   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:47:58.678782   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:47:58.733869   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:47:58.733882   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:47:58.733889   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:47:58.755678   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:47:58.755696   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:48:00.804394   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048668105s)
	I0224 15:48:03.305199   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:48:03.468812   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:48:03.490197   46043 logs.go:277] 0 containers: []
	W0224 15:48:03.490211   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:48:03.490280   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:48:03.509428   46043 logs.go:277] 0 containers: []
	W0224 15:48:03.509444   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:48:03.509515   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:48:03.529876   46043 logs.go:277] 0 containers: []
	W0224 15:48:03.529891   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:48:03.529979   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:48:03.550215   46043 logs.go:277] 0 containers: []
	W0224 15:48:03.550228   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:48:03.550300   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:48:03.569951   46043 logs.go:277] 0 containers: []
	W0224 15:48:03.569964   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:48:03.570032   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:48:03.589317   46043 logs.go:277] 0 containers: []
	W0224 15:48:03.589330   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:48:03.589396   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:48:03.608399   46043 logs.go:277] 0 containers: []
	W0224 15:48:03.608412   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:48:03.608479   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:48:03.627457   46043 logs.go:277] 0 containers: []
	W0224 15:48:03.627470   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:48:03.627478   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:48:03.627485   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:48:03.666429   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:48:03.666445   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:48:03.678743   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:48:03.678757   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:48:03.733402   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:48:03.733414   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:48:03.733428   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:48:03.755029   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:48:03.755043   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:48:05.801478   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046404601s)
	I0224 15:48:08.302147   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:48:08.468928   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:48:08.489725   46043 logs.go:277] 0 containers: []
	W0224 15:48:08.489739   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:48:08.489808   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:48:08.509528   46043 logs.go:277] 0 containers: []
	W0224 15:48:08.509543   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:48:08.509616   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:48:08.529320   46043 logs.go:277] 0 containers: []
	W0224 15:48:08.529336   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:48:08.529407   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:48:08.556232   46043 logs.go:277] 0 containers: []
	W0224 15:48:08.556251   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:48:08.556334   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:48:08.578410   46043 logs.go:277] 0 containers: []
	W0224 15:48:08.578422   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:48:08.578493   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:48:08.597631   46043 logs.go:277] 0 containers: []
	W0224 15:48:08.597651   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:48:08.597729   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:48:08.618241   46043 logs.go:277] 0 containers: []
	W0224 15:48:08.618256   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:48:08.618327   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:48:08.636887   46043 logs.go:277] 0 containers: []
	W0224 15:48:08.636900   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:48:08.636907   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:48:08.636917   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:48:10.681281   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044333807s)
	I0224 15:48:10.681465   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:48:10.681474   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:48:10.720573   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:48:10.720592   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:48:10.733545   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:48:10.733565   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:48:10.789701   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:48:10.789713   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:48:10.789721   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:48:13.313248   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:48:13.468813   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:48:13.489700   46043 logs.go:277] 0 containers: []
	W0224 15:48:13.489717   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:48:13.489791   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:48:13.509531   46043 logs.go:277] 0 containers: []
	W0224 15:48:13.509544   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:48:13.509615   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:48:13.528562   46043 logs.go:277] 0 containers: []
	W0224 15:48:13.528576   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:48:13.528644   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:48:13.547765   46043 logs.go:277] 0 containers: []
	W0224 15:48:13.547778   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:48:13.547848   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:48:13.568791   46043 logs.go:277] 0 containers: []
	W0224 15:48:13.568804   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:48:13.568873   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:48:13.588461   46043 logs.go:277] 0 containers: []
	W0224 15:48:13.588476   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:48:13.588547   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:48:13.607882   46043 logs.go:277] 0 containers: []
	W0224 15:48:13.607895   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:48:13.607962   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:48:13.626797   46043 logs.go:277] 0 containers: []
	W0224 15:48:13.626810   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:48:13.626817   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:48:13.626824   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:48:13.681644   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:48:13.681655   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:48:13.681662   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:48:13.702531   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:48:13.702547   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:48:15.747619   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045042594s)
	I0224 15:48:15.747729   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:48:15.747736   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:48:15.785831   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:48:15.785846   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:48:18.298519   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:48:18.467143   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:48:18.489140   46043 logs.go:277] 0 containers: []
	W0224 15:48:18.489153   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:48:18.489221   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:48:18.507986   46043 logs.go:277] 0 containers: []
	W0224 15:48:18.508001   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:48:18.508070   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:48:18.527098   46043 logs.go:277] 0 containers: []
	W0224 15:48:18.527110   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:48:18.527176   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:48:18.547193   46043 logs.go:277] 0 containers: []
	W0224 15:48:18.547206   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:48:18.547274   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:48:18.566850   46043 logs.go:277] 0 containers: []
	W0224 15:48:18.566862   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:48:18.566932   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:48:18.586369   46043 logs.go:277] 0 containers: []
	W0224 15:48:18.586381   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:48:18.586449   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:48:18.605639   46043 logs.go:277] 0 containers: []
	W0224 15:48:18.605652   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:48:18.605727   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:48:18.625356   46043 logs.go:277] 0 containers: []
	W0224 15:48:18.625370   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:48:18.625377   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:48:18.625385   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:48:18.663902   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:48:18.663918   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:48:18.676361   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:48:18.676376   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:48:18.731018   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:48:18.731031   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:48:18.731040   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:48:18.752454   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:48:18.752470   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:48:20.798720   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046220169s)
	I0224 15:48:23.300513   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:48:23.467042   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:48:23.486450   46043 logs.go:277] 0 containers: []
	W0224 15:48:23.486463   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:48:23.486534   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:48:23.507440   46043 logs.go:277] 0 containers: []
	W0224 15:48:23.507454   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:48:23.507523   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:48:23.528346   46043 logs.go:277] 0 containers: []
	W0224 15:48:23.528361   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:48:23.528434   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:48:23.554753   46043 logs.go:277] 0 containers: []
	W0224 15:48:23.554769   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:48:23.554848   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:48:23.574176   46043 logs.go:277] 0 containers: []
	W0224 15:48:23.574191   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:48:23.574261   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:48:23.594194   46043 logs.go:277] 0 containers: []
	W0224 15:48:23.594208   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:48:23.594284   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:48:23.613987   46043 logs.go:277] 0 containers: []
	W0224 15:48:23.614001   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:48:23.614071   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:48:23.633210   46043 logs.go:277] 0 containers: []
	W0224 15:48:23.633224   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:48:23.633232   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:48:23.633239   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:48:23.671371   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:48:23.671387   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:48:23.684236   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:48:23.684250   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:48:23.738753   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:48:23.738764   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:48:23.738775   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:48:23.759607   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:48:23.759620   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:48:25.803355   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043704798s)
	I0224 15:48:28.304277   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:48:28.469241   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:48:28.490471   46043 logs.go:277] 0 containers: []
	W0224 15:48:28.490485   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:48:28.490555   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:48:28.510115   46043 logs.go:277] 0 containers: []
	W0224 15:48:28.510128   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:48:28.510195   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:48:28.531110   46043 logs.go:277] 0 containers: []
	W0224 15:48:28.531123   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:48:28.531191   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:48:28.550968   46043 logs.go:277] 0 containers: []
	W0224 15:48:28.550982   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:48:28.551050   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:48:28.569909   46043 logs.go:277] 0 containers: []
	W0224 15:48:28.569922   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:48:28.569992   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:48:28.589393   46043 logs.go:277] 0 containers: []
	W0224 15:48:28.589406   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:48:28.589477   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:48:28.610078   46043 logs.go:277] 0 containers: []
	W0224 15:48:28.610092   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:48:28.610177   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:48:28.629644   46043 logs.go:277] 0 containers: []
	W0224 15:48:28.629657   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:48:28.629667   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:48:28.629676   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:48:28.667649   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:48:28.667665   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:48:28.679808   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:48:28.679823   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:48:28.735689   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:48:28.735708   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:48:28.735715   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:48:28.757529   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:48:28.757547   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:48:30.805460   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047882864s)
	I0224 15:48:33.306238   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:48:33.468189   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:48:33.489925   46043 logs.go:277] 0 containers: []
	W0224 15:48:33.489943   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:48:33.490011   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:48:33.509375   46043 logs.go:277] 0 containers: []
	W0224 15:48:33.509388   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:48:33.509456   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:48:33.528467   46043 logs.go:277] 0 containers: []
	W0224 15:48:33.528481   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:48:33.528552   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:48:33.548192   46043 logs.go:277] 0 containers: []
	W0224 15:48:33.548207   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:48:33.548282   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:48:33.567235   46043 logs.go:277] 0 containers: []
	W0224 15:48:33.567247   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:48:33.567316   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:48:33.586264   46043 logs.go:277] 0 containers: []
	W0224 15:48:33.586278   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:48:33.586344   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:48:33.606366   46043 logs.go:277] 0 containers: []
	W0224 15:48:33.606379   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:48:33.606444   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:48:33.624476   46043 logs.go:277] 0 containers: []
	W0224 15:48:33.624489   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:48:33.624497   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:48:33.624504   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:48:33.646229   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:48:33.646245   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:48:35.693821   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047544853s)
	I0224 15:48:35.693929   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:48:35.693937   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:48:35.732196   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:48:35.732212   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:48:35.744562   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:48:35.744579   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:48:35.798634   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:48:38.299879   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:48:38.467184   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:48:38.486927   46043 logs.go:277] 0 containers: []
	W0224 15:48:38.486944   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:48:38.487022   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:48:38.507296   46043 logs.go:277] 0 containers: []
	W0224 15:48:38.507312   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:48:38.507388   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:48:38.527747   46043 logs.go:277] 0 containers: []
	W0224 15:48:38.527760   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:48:38.527826   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:48:38.555484   46043 logs.go:277] 0 containers: []
	W0224 15:48:38.555497   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:48:38.555567   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:48:38.574496   46043 logs.go:277] 0 containers: []
	W0224 15:48:38.574509   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:48:38.574578   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:48:38.593697   46043 logs.go:277] 0 containers: []
	W0224 15:48:38.593709   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:48:38.593784   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:48:38.613149   46043 logs.go:277] 0 containers: []
	W0224 15:48:38.613161   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:48:38.613236   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:48:38.631996   46043 logs.go:277] 0 containers: []
	W0224 15:48:38.632010   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:48:38.632019   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:48:38.632026   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:48:38.671003   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:48:38.671017   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:48:38.683571   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:48:38.683584   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:48:38.738105   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:48:38.738118   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:48:38.738125   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:48:38.759820   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:48:38.759837   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:48:40.803649   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043782086s)
	I0224 15:48:43.305673   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:48:43.469401   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:48:43.491094   46043 logs.go:277] 0 containers: []
	W0224 15:48:43.491107   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:48:43.491178   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:48:43.510480   46043 logs.go:277] 0 containers: []
	W0224 15:48:43.510492   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:48:43.510561   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:48:43.530525   46043 logs.go:277] 0 containers: []
	W0224 15:48:43.530538   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:48:43.530608   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:48:43.550731   46043 logs.go:277] 0 containers: []
	W0224 15:48:43.550744   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:48:43.550813   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:48:43.570590   46043 logs.go:277] 0 containers: []
	W0224 15:48:43.570604   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:48:43.570674   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:48:43.590313   46043 logs.go:277] 0 containers: []
	W0224 15:48:43.590328   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:48:43.590396   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:48:43.609383   46043 logs.go:277] 0 containers: []
	W0224 15:48:43.609396   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:48:43.609466   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:48:43.629239   46043 logs.go:277] 0 containers: []
	W0224 15:48:43.629252   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:48:43.629259   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:48:43.629267   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:48:43.667420   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:48:43.667435   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:48:43.679605   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:48:43.679619   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:48:43.735818   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:48:43.735832   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:48:43.735842   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:48:43.757257   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:48:43.757274   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:48:45.803887   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046583061s)
	I0224 15:48:48.304218   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:48:48.468537   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:48:48.490501   46043 logs.go:277] 0 containers: []
	W0224 15:48:48.490514   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:48:48.490584   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:48:48.509547   46043 logs.go:277] 0 containers: []
	W0224 15:48:48.509559   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:48:48.509624   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:48:48.528790   46043 logs.go:277] 0 containers: []
	W0224 15:48:48.528804   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:48:48.528875   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:48:48.548171   46043 logs.go:277] 0 containers: []
	W0224 15:48:48.548187   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:48:48.548255   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:48:48.568747   46043 logs.go:277] 0 containers: []
	W0224 15:48:48.568760   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:48:48.568826   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:48:48.587966   46043 logs.go:277] 0 containers: []
	W0224 15:48:48.587979   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:48:48.588071   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:48:48.608822   46043 logs.go:277] 0 containers: []
	W0224 15:48:48.608834   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:48:48.608901   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:48:48.628113   46043 logs.go:277] 0 containers: []
	W0224 15:48:48.628127   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:48:48.628134   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:48:48.628141   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:48:48.666721   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:48:48.666736   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:48:48.679015   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:48:48.679038   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:48:48.734118   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:48:48.734129   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:48:48.734137   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:48:48.755032   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:48:48.755046   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:48:50.800379   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045303932s)
	I0224 15:48:53.301440   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:48:53.467482   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:48:53.487938   46043 logs.go:277] 0 containers: []
	W0224 15:48:53.487952   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:48:53.488025   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:48:53.508214   46043 logs.go:277] 0 containers: []
	W0224 15:48:53.508227   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:48:53.508294   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:48:53.528494   46043 logs.go:277] 0 containers: []
	W0224 15:48:53.528510   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:48:53.528585   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:48:53.554795   46043 logs.go:277] 0 containers: []
	W0224 15:48:53.554808   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:48:53.554875   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:48:53.574841   46043 logs.go:277] 0 containers: []
	W0224 15:48:53.574854   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:48:53.574923   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:48:53.593809   46043 logs.go:277] 0 containers: []
	W0224 15:48:53.593821   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:48:53.593887   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:48:53.613353   46043 logs.go:277] 0 containers: []
	W0224 15:48:53.613365   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:48:53.613433   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:48:53.632557   46043 logs.go:277] 0 containers: []
	W0224 15:48:53.632569   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:48:53.632576   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:48:53.632584   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:48:53.670711   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:48:53.670725   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:48:53.682668   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:48:53.682683   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:48:53.736768   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:48:53.736778   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:48:53.736785   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:48:53.758241   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:48:53.758257   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:48:55.801908   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043620653s)
	I0224 15:48:58.303040   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:48:58.468041   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:48:58.489521   46043 logs.go:277] 0 containers: []
	W0224 15:48:58.489534   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:48:58.489606   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:48:58.509172   46043 logs.go:277] 0 containers: []
	W0224 15:48:58.509184   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:48:58.509251   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:48:58.528717   46043 logs.go:277] 0 containers: []
	W0224 15:48:58.528729   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:48:58.528794   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:48:58.547703   46043 logs.go:277] 0 containers: []
	W0224 15:48:58.547716   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:48:58.547784   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:48:58.567930   46043 logs.go:277] 0 containers: []
	W0224 15:48:58.567942   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:48:58.568011   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:48:58.587457   46043 logs.go:277] 0 containers: []
	W0224 15:48:58.587471   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:48:58.587540   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:48:58.606295   46043 logs.go:277] 0 containers: []
	W0224 15:48:58.606309   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:48:58.606377   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:48:58.626239   46043 logs.go:277] 0 containers: []
	W0224 15:48:58.626252   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:48:58.626259   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:48:58.626266   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:48:58.664530   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:48:58.664544   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:48:58.677305   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:48:58.677319   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:48:58.732344   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:48:58.732357   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:48:58.732365   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:48:58.755046   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:48:58.755064   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:49:00.801328   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046233569s)
	I0224 15:49:03.302498   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:49:03.468568   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:49:03.491711   46043 logs.go:277] 0 containers: []
	W0224 15:49:03.491725   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:49:03.491796   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:49:03.510486   46043 logs.go:277] 0 containers: []
	W0224 15:49:03.510499   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:49:03.510569   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:49:03.530323   46043 logs.go:277] 0 containers: []
	W0224 15:49:03.530337   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:49:03.530403   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:49:03.550269   46043 logs.go:277] 0 containers: []
	W0224 15:49:03.550282   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:49:03.550347   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:49:03.569376   46043 logs.go:277] 0 containers: []
	W0224 15:49:03.569390   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:49:03.569456   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:49:03.589644   46043 logs.go:277] 0 containers: []
	W0224 15:49:03.589658   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:49:03.589728   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:49:03.609114   46043 logs.go:277] 0 containers: []
	W0224 15:49:03.609134   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:49:03.609210   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:49:03.627886   46043 logs.go:277] 0 containers: []
	W0224 15:49:03.627898   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:49:03.627905   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:49:03.627912   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:49:05.673749   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045805468s)
	I0224 15:49:05.673855   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:49:05.673862   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:49:05.711416   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:49:05.711431   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:49:05.723995   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:49:05.724008   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:49:05.776830   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:49:05.776849   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:49:05.776856   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:49:08.298426   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:49:08.468499   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:49:08.489029   46043 logs.go:277] 0 containers: []
	W0224 15:49:08.489043   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:49:08.489115   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:49:08.508874   46043 logs.go:277] 0 containers: []
	W0224 15:49:08.508888   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:49:08.508960   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:49:08.528418   46043 logs.go:277] 0 containers: []
	W0224 15:49:08.528432   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:49:08.528508   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:49:08.554978   46043 logs.go:277] 0 containers: []
	W0224 15:49:08.554992   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:49:08.555068   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:49:08.574849   46043 logs.go:277] 0 containers: []
	W0224 15:49:08.574863   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:49:08.574939   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:49:08.595355   46043 logs.go:277] 0 containers: []
	W0224 15:49:08.595367   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:49:08.595436   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:49:08.614928   46043 logs.go:277] 0 containers: []
	W0224 15:49:08.614939   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:49:08.614996   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:49:08.635089   46043 logs.go:277] 0 containers: []
	W0224 15:49:08.635103   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:49:08.635110   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:49:08.635117   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:49:08.656788   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:49:08.656802   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:49:10.704038   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047205588s)
	I0224 15:49:10.704145   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:49:10.704154   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:49:10.742047   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:49:10.742066   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:49:10.755267   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:49:10.755287   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:49:10.810712   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:49:13.312985   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:49:13.468448   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:49:13.489965   46043 logs.go:277] 0 containers: []
	W0224 15:49:13.489979   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:49:13.490046   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:49:13.511516   46043 logs.go:277] 0 containers: []
	W0224 15:49:13.511530   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:49:13.511600   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:49:13.531765   46043 logs.go:277] 0 containers: []
	W0224 15:49:13.531779   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:49:13.531850   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:49:13.552087   46043 logs.go:277] 0 containers: []
	W0224 15:49:13.552101   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:49:13.552168   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:49:13.571946   46043 logs.go:277] 0 containers: []
	W0224 15:49:13.571961   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:49:13.572032   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:49:13.592265   46043 logs.go:277] 0 containers: []
	W0224 15:49:13.592278   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:49:13.592347   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:49:13.612111   46043 logs.go:277] 0 containers: []
	W0224 15:49:13.612124   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:49:13.612205   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:49:13.631614   46043 logs.go:277] 0 containers: []
	W0224 15:49:13.631627   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:49:13.631635   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:49:13.631642   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:49:13.670874   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:49:13.670888   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:49:13.682940   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:49:13.682954   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:49:13.737921   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:49:13.737939   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:49:13.737946   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:49:13.759305   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:49:13.759322   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:49:15.802745   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043391946s)
	I0224 15:49:18.303409   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:49:18.468487   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:49:18.490232   46043 logs.go:277] 0 containers: []
	W0224 15:49:18.490246   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:49:18.490315   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:49:18.509952   46043 logs.go:277] 0 containers: []
	W0224 15:49:18.509964   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:49:18.510030   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:49:18.529573   46043 logs.go:277] 0 containers: []
	W0224 15:49:18.529586   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:49:18.529654   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:49:18.549508   46043 logs.go:277] 0 containers: []
	W0224 15:49:18.549521   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:49:18.549591   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:49:18.569018   46043 logs.go:277] 0 containers: []
	W0224 15:49:18.569031   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:49:18.569098   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:49:18.588904   46043 logs.go:277] 0 containers: []
	W0224 15:49:18.588918   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:49:18.588985   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:49:18.608505   46043 logs.go:277] 0 containers: []
	W0224 15:49:18.608518   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:49:18.608590   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:49:18.627686   46043 logs.go:277] 0 containers: []
	W0224 15:49:18.627700   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:49:18.627707   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:49:18.627714   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:49:18.666913   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:49:18.666929   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:49:18.679310   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:49:18.679325   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:49:18.733627   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:49:18.733637   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:49:18.733644   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:49:18.754999   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:49:18.755014   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:49:20.799628   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044584285s)
	I0224 15:49:23.302005   46043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:49:23.468956   46043 kubeadm.go:637] restartCluster took 4m11.230380599s
	W0224 15:49:23.469039   46043 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0224 15:49:23.469057   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0224 15:49:23.880320   46043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:49:23.890349   46043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 15:49:23.898278   46043 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 15:49:23.898330   46043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 15:49:23.905927   46043 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 15:49:23.905955   46043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 15:49:23.954087   46043 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0224 15:49:23.954145   46043 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 15:49:24.121674   46043 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 15:49:24.121773   46043 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 15:49:24.121854   46043 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 15:49:24.277984   46043 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:49:24.278798   46043 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:49:24.285934   46043 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0224 15:49:24.361858   46043 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 15:49:24.383673   46043 out.go:204]   - Generating certificates and keys ...
	I0224 15:49:24.383748   46043 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 15:49:24.383838   46043 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 15:49:24.383944   46043 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0224 15:49:24.384022   46043 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0224 15:49:24.384111   46043 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0224 15:49:24.384177   46043 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0224 15:49:24.384249   46043 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0224 15:49:24.384331   46043 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0224 15:49:24.384426   46043 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0224 15:49:24.384537   46043 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0224 15:49:24.384572   46043 kubeadm.go:322] [certs] Using the existing "sa" key
	I0224 15:49:24.384626   46043 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 15:49:24.531772   46043 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 15:49:24.812612   46043 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 15:49:24.874448   46043 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 15:49:25.116720   46043 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 15:49:25.117258   46043 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 15:49:25.138244   46043 out.go:204]   - Booting up control plane ...
	I0224 15:49:25.138441   46043 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 15:49:25.138591   46043 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 15:49:25.138701   46043 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 15:49:25.138822   46043 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 15:49:25.139055   46043 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 15:50:05.126465   46043 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0224 15:50:05.127486   46043 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:50:05.127687   46043 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:50:10.128830   46043 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:50:10.129051   46043 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:50:20.130601   46043 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:50:20.130864   46043 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:50:40.132072   46043 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:50:40.132276   46043 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:51:20.133825   46043 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:51:20.134061   46043 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:51:20.134072   46043 kubeadm.go:322] 
	I0224 15:51:20.134121   46043 kubeadm.go:322] Unfortunately, an error has occurred:
	I0224 15:51:20.134176   46043 kubeadm.go:322] 	timed out waiting for the condition
	I0224 15:51:20.134188   46043 kubeadm.go:322] 
	I0224 15:51:20.134237   46043 kubeadm.go:322] This error is likely caused by:
	I0224 15:51:20.134337   46043 kubeadm.go:322] 	- The kubelet is not running
	I0224 15:51:20.134524   46043 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 15:51:20.134537   46043 kubeadm.go:322] 
	I0224 15:51:20.134655   46043 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 15:51:20.134698   46043 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0224 15:51:20.134739   46043 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0224 15:51:20.134753   46043 kubeadm.go:322] 
	I0224 15:51:20.134884   46043 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 15:51:20.134996   46043 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0224 15:51:20.135089   46043 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0224 15:51:20.135151   46043 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0224 15:51:20.135242   46043 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0224 15:51:20.135324   46043 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0224 15:51:20.138139   46043 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0224 15:51:20.138206   46043 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0224 15:51:20.138303   46043 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0224 15:51:20.138399   46043 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:51:20.138471   46043 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 15:51:20.138544   46043 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0224 15:51:20.138668   46043 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
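The preflight warnings in the attempt above point at a cgroup-driver mismatch: Docker is running with the "cgroupfs" driver while kubeadm recommends "systemd", and a kubelet that never becomes healthy is a common symptom of that mismatch. A minimal sketch of the commonly documented remedy, assuming the Docker daemon config inside the node can be edited (whether this is the actual root cause is not confirmed by this log; newer minikube releases also expose a --force-systemd start flag for the same purpose):

  # inside the minikube node (e.g. via: minikube ssh -p ingress-addon-legacy-721000)
  sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
  {
    "exec-opts": ["native.cgroupdriver=systemd"]
  }
  EOF
  sudo systemctl restart docker   # then let the retry below run kubeadm init again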
	
	I0224 15:51:20.138696   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0224 15:51:20.550872   46043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:51:20.561191   46043 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 15:51:20.561247   46043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 15:51:20.568757   46043 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 15:51:20.568781   46043 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 15:51:20.616091   46043 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0224 15:51:20.616149   46043 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 15:51:20.785239   46043 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 15:51:20.785341   46043 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 15:51:20.785421   46043 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 15:51:20.941926   46043 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 15:51:20.942769   46043 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 15:51:20.949338   46043 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0224 15:51:21.017759   46043 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 15:51:21.065951   46043 out.go:204]   - Generating certificates and keys ...
	I0224 15:51:21.066060   46043 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 15:51:21.066164   46043 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 15:51:21.066272   46043 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0224 15:51:21.066343   46043 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0224 15:51:21.066405   46043 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0224 15:51:21.066452   46043 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0224 15:51:21.066510   46043 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0224 15:51:21.066575   46043 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0224 15:51:21.066666   46043 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0224 15:51:21.066759   46043 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0224 15:51:21.066795   46043 kubeadm.go:322] [certs] Using the existing "sa" key
	I0224 15:51:21.066874   46043 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 15:51:21.445862   46043 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 15:51:21.661660   46043 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 15:51:21.840024   46043 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 15:51:21.896748   46043 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 15:51:21.897636   46043 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 15:51:21.920442   46043 out.go:204]   - Booting up control plane ...
	I0224 15:51:21.920654   46043 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 15:51:21.920800   46043 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 15:51:21.920927   46043 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 15:51:21.921187   46043 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 15:51:21.921438   46043 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 15:52:01.906934   46043 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0224 15:52:01.907796   46043 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:52:01.908008   46043 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:52:06.909351   46043 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:52:06.909599   46043 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:52:16.910715   46043 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:52:16.910916   46043 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:52:36.912747   46043 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:52:36.912974   46043 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:53:16.914630   46043 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:53:16.914882   46043 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:53:16.914899   46043 kubeadm.go:322] 
	I0224 15:53:16.914979   46043 kubeadm.go:322] Unfortunately, an error has occurred:
	I0224 15:53:16.915043   46043 kubeadm.go:322] 	timed out waiting for the condition
	I0224 15:53:16.915054   46043 kubeadm.go:322] 
	I0224 15:53:16.915113   46043 kubeadm.go:322] This error is likely caused by:
	I0224 15:53:16.915150   46043 kubeadm.go:322] 	- The kubelet is not running
	I0224 15:53:16.915294   46043 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 15:53:16.915309   46043 kubeadm.go:322] 
	I0224 15:53:16.915480   46043 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 15:53:16.915537   46043 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0224 15:53:16.915594   46043 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0224 15:53:16.915604   46043 kubeadm.go:322] 
	I0224 15:53:16.915722   46043 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 15:53:16.915838   46043 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0224 15:53:16.915942   46043 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0224 15:53:16.915990   46043 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0224 15:53:16.916082   46043 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0224 15:53:16.916110   46043 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0224 15:53:16.918426   46043 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0224 15:53:16.918490   46043 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0224 15:53:16.918602   46043 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0224 15:53:16.918692   46043 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:53:16.918768   46043 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 15:53:16.918833   46043 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0224 15:53:16.918853   46043 kubeadm.go:403] StartCluster complete in 8m4.70614135s
	I0224 15:53:16.918948   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:53:16.939129   46043 logs.go:277] 0 containers: []
	W0224 15:53:16.939142   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:53:16.939209   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:53:16.959355   46043 logs.go:277] 0 containers: []
	W0224 15:53:16.975579   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:53:16.975678   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:53:16.995914   46043 logs.go:277] 0 containers: []
	W0224 15:53:16.995928   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:53:16.995998   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:53:17.017414   46043 logs.go:277] 0 containers: []
	W0224 15:53:17.017429   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:53:17.017502   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:53:17.053061   46043 logs.go:277] 0 containers: []
	W0224 15:53:17.053074   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:53:17.053140   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:53:17.072468   46043 logs.go:277] 0 containers: []
	W0224 15:53:17.072481   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:53:17.072550   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:53:17.092166   46043 logs.go:277] 0 containers: []
	W0224 15:53:17.092180   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:53:17.092247   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:53:17.111270   46043 logs.go:277] 0 containers: []
	W0224 15:53:17.111293   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:53:17.111305   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:53:17.111313   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:53:17.154353   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:53:17.154371   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:53:17.167202   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:53:17.167216   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:53:17.219728   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:53:17.219738   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:53:17.219745   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:53:17.240860   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:53:17.240874   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:53:19.285624   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044720147s)
	W0224 15:53:19.285734   46043 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0224 15:53:19.285749   46043 out.go:239] * 
	* 
	W0224 15:53:19.285872   46043 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 15:53:19.285893   46043 out.go:239] * 
	* 
	W0224 15:53:19.286556   46043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
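For a failure like this, the follow-up the box asks for amounts to collecting the full minikube logs plus the kubelet-side diagnostics kubeadm suggested earlier; a short sketch, with the profile name taken from this run and logs.txt as an arbitrary output file:

  minikube logs -p ingress-addon-legacy-721000 --file=logs.txt
  minikube ssh -p ingress-addon-legacy-721000 -- \
    "sudo systemctl status kubelet --no-pager; sudo journalctl -u kubelet -n 100 --no-pager"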
	I0224 15:53:19.329163   46043 out.go:177] 
	W0224 15:53:19.371265   46043 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 15:53:19.371366   46043 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0224 15:53:19.371412   46043 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0224 15:53:19.430248   46043 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-583000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
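Editorial note: the failure above is K8S_KUBELET_NOT_RUNNING, and the captured log itself suggests retrying with the systemd cgroup driver. A minimal sketch of that retry, reusing only the profile name and flags already shown in this report plus the suggested --extra-config flag from the log (this command was not executed as part of this run):

	out/minikube-darwin-amd64 start -p old-k8s-version-583000 --memory=2200 --driver=docker \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd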
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-583000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-583000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa",
	        "Created": "2023-02-24T23:39:27.334833203Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 677863,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T23:45:08.286781852Z",
	            "FinishedAt": "2023-02-24T23:45:05.367336651Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hostname",
	        "HostsPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hosts",
	        "LogPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa-json.log",
	        "Name": "/old-k8s-version-583000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-583000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-583000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-583000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-583000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-583000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3947a0f2180836ac916ac27cc999772cb08b4096aeffcc7de4c5c0d9b03b291e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61760"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61761"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61762"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61763"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61759"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3947a0f21808",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-583000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9cf258058c71",
	                        "old-k8s-version-583000"
	                    ],
	                    "NetworkID": "2385c01da446b4577899b64cfb7c6c9559167bd3e5ad2b3a0a47d05890f119a8",
	                    "EndpointID": "4f2e701947effd2e133234f8a53a9152bf92712aec3312653e8d2e8dfb2ddc47",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
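Editorial note: the full docker inspect dump above can be narrowed to a single field with a format filter, for example the same filter the harness itself uses later in this log to read a container's state (a sketch only, assuming the docker CLI on the test host; not part of the captured run):

	docker container inspect old-k8s-version-583000 --format={{.State.Status}}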
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 2 (442.810684ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-583000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-583000 logs -n 25: (3.432126798s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p false-416000 sudo                              | false-416000           | jenkins | v1.29.0 | 24 Feb 23 15:39 PST | 24 Feb 23 15:39 PST |
	|         | containerd config dump                            |                        |         |         |                     |                     |
	| ssh     | -p false-416000 sudo systemctl                    | false-416000           | jenkins | v1.29.0 | 24 Feb 23 15:39 PST |                     |
	|         | status crio --all --full                          |                        |         |         |                     |                     |
	|         | --no-pager                                        |                        |         |         |                     |                     |
	| ssh     | -p false-416000 sudo systemctl                    | false-416000           | jenkins | v1.29.0 | 24 Feb 23 15:39 PST | 24 Feb 23 15:39 PST |
	|         | cat crio --no-pager                               |                        |         |         |                     |                     |
	| ssh     | -p false-416000 sudo find                         | false-416000           | jenkins | v1.29.0 | 24 Feb 23 15:39 PST | 24 Feb 23 15:39 PST |
	|         | /etc/crio -type f -exec sh -c                     |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                              |                        |         |         |                     |                     |
	| ssh     | -p false-416000 sudo crio                         | false-416000           | jenkins | v1.29.0 | 24 Feb 23 15:39 PST | 24 Feb 23 15:39 PST |
	|         | config                                            |                        |         |         |                     |                     |
	| delete  | -p false-416000                                   | false-416000           | jenkins | v1.29.0 | 24 Feb 23 15:39 PST | 24 Feb 23 15:39 PST |
	| start   | -p no-preload-540000                              | no-preload-540000      | jenkins | v1.29.0 | 24 Feb 23 15:39 PST | 24 Feb 23 15:41 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-540000        | no-preload-540000      | jenkins | v1.29.0 | 24 Feb 23 15:41 PST | 24 Feb 23 15:41 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p no-preload-540000                              | no-preload-540000      | jenkins | v1.29.0 | 24 Feb 23 15:41 PST | 24 Feb 23 15:41 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-540000             | no-preload-540000      | jenkins | v1.29.0 | 24 Feb 23 15:41 PST | 24 Feb 23 15:41 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-540000                              | no-preload-540000      | jenkins | v1.29.0 | 24 Feb 23 15:41 PST | 24 Feb 23 15:51 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-583000   | old-k8s-version-583000 | jenkins | v1.29.0 | 24 Feb 23 15:43 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-583000                         | old-k8s-version-583000 | jenkins | v1.29.0 | 24 Feb 23 15:45 PST | 24 Feb 23 15:45 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-583000        | old-k8s-version-583000 | jenkins | v1.29.0 | 24 Feb 23 15:45 PST | 24 Feb 23 15:45 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-583000                         | old-k8s-version-583000 | jenkins | v1.29.0 | 24 Feb 23 15:45 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| ssh     | -p no-preload-540000 sudo                         | no-preload-540000      | jenkins | v1.29.0 | 24 Feb 23 15:51 PST | 24 Feb 23 15:51 PST |
	|         | crictl images -o json                             |                        |         |         |                     |                     |
	| pause   | -p no-preload-540000                              | no-preload-540000      | jenkins | v1.29.0 | 24 Feb 23 15:51 PST | 24 Feb 23 15:51 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| unpause | -p no-preload-540000                              | no-preload-540000      | jenkins | v1.29.0 | 24 Feb 23 15:51 PST | 24 Feb 23 15:51 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| delete  | -p no-preload-540000                              | no-preload-540000      | jenkins | v1.29.0 | 24 Feb 23 15:51 PST | 24 Feb 23 15:51 PST |
	| delete  | -p no-preload-540000                              | no-preload-540000      | jenkins | v1.29.0 | 24 Feb 23 15:51 PST | 24 Feb 23 15:51 PST |
	| start   | -p embed-certs-451000                             | embed-certs-451000     | jenkins | v1.29.0 | 24 Feb 23 15:51 PST | 24 Feb 23 15:52 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-451000       | embed-certs-451000     | jenkins | v1.29.0 | 24 Feb 23 15:52 PST | 24 Feb 23 15:52 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p embed-certs-451000                             | embed-certs-451000     | jenkins | v1.29.0 | 24 Feb 23 15:52 PST | 24 Feb 23 15:52 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-451000            | embed-certs-451000     | jenkins | v1.29.0 | 24 Feb 23 15:52 PST | 24 Feb 23 15:52 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-451000                             | embed-certs-451000     | jenkins | v1.29.0 | 24 Feb 23 15:52 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 15:52:39
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 15:52:39.132941   46816 out.go:296] Setting OutFile to fd 1 ...
	I0224 15:52:39.133119   46816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:52:39.133123   46816 out.go:309] Setting ErrFile to fd 2...
	I0224 15:52:39.133127   46816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:52:39.133234   46816 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 15:52:39.134603   46816 out.go:303] Setting JSON to false
	I0224 15:52:39.152906   46816 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10333,"bootTime":1677272426,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 15:52:39.152990   46816 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 15:52:39.174339   46816 out.go:177] * [embed-certs-451000] minikube v1.29.0 on Darwin 13.2.1
	I0224 15:52:39.196389   46816 notify.go:220] Checking for updates...
	I0224 15:52:39.218251   46816 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 15:52:39.240655   46816 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:52:39.262414   46816 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 15:52:39.283386   46816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 15:52:39.304531   46816 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	I0224 15:52:39.326389   46816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 15:52:39.347673   46816 config.go:182] Loaded profile config "embed-certs-451000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:52:39.348155   46816 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 15:52:39.409846   46816 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 15:52:39.409953   46816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:52:39.558629   46816 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-24 23:52:39.463230803 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:52:39.579631   46816 out.go:177] * Using the docker driver based on existing profile
	I0224 15:52:39.600508   46816 start.go:296] selected driver: docker
	I0224 15:52:39.600537   46816 start.go:857] validating driver "docker" against &{Name:embed-certs-451000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-451000 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:52:39.600669   46816 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 15:52:39.604337   46816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 15:52:39.786641   46816 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-24 23:52:39.654826433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 15:52:39.786804   46816 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 15:52:39.786823   46816 cni.go:84] Creating CNI manager for ""
	I0224 15:52:39.786836   46816 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 15:52:39.786842   46816 start_flags.go:319] config:
	{Name:embed-certs-451000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-451000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:52:39.808628   46816 out.go:177] * Starting control plane node embed-certs-451000 in cluster embed-certs-451000
	I0224 15:52:39.829505   46816 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 15:52:39.851356   46816 out.go:177] * Pulling base image ...
	I0224 15:52:39.894675   46816 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:52:39.894675   46816 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 15:52:39.894775   46816 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 15:52:39.894795   46816 cache.go:57] Caching tarball of preloaded images
	I0224 15:52:39.894995   46816 preload.go:174] Found /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 15:52:39.895014   46816 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 15:52:39.896034   46816 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/embed-certs-451000/config.json ...
	I0224 15:52:39.951476   46816 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 15:52:39.951494   46816 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 15:52:39.951516   46816 cache.go:193] Successfully downloaded all kic artifacts
	I0224 15:52:39.951552   46816 start.go:364] acquiring machines lock for embed-certs-451000: {Name:mk1180e8e18ee3c3c31789e75837e8d1f00d0064 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 15:52:39.951633   46816 start.go:368] acquired machines lock for "embed-certs-451000" in 62.948µs
	I0224 15:52:39.951662   46816 start.go:96] Skipping create...Using existing machine configuration
	I0224 15:52:39.951670   46816 fix.go:55] fixHost starting: 
	I0224 15:52:39.951940   46816 cli_runner.go:164] Run: docker container inspect embed-certs-451000 --format={{.State.Status}}
	I0224 15:52:40.009812   46816 fix.go:103] recreateIfNeeded on embed-certs-451000: state=Stopped err=<nil>
	W0224 15:52:40.009858   46816 fix.go:129] unexpected machine state, will restart: <nil>
	I0224 15:52:40.053515   46816 out.go:177] * Restarting existing docker container for "embed-certs-451000" ...
	I0224 15:52:40.075725   46816 cli_runner.go:164] Run: docker start embed-certs-451000
	I0224 15:52:40.410047   46816 cli_runner.go:164] Run: docker container inspect embed-certs-451000 --format={{.State.Status}}
	I0224 15:52:40.471076   46816 kic.go:426] container "embed-certs-451000" state is running.
	I0224 15:52:40.471697   46816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-451000
	I0224 15:52:40.537303   46816 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/embed-certs-451000/config.json ...
	I0224 15:52:40.537871   46816 machine.go:88] provisioning docker machine ...
	I0224 15:52:40.537921   46816 ubuntu.go:169] provisioning hostname "embed-certs-451000"
	I0224 15:52:40.538060   46816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-451000
	I0224 15:52:40.608413   46816 main.go:141] libmachine: Using SSH client type: native
	I0224 15:52:40.608861   46816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61911 <nil> <nil>}
	I0224 15:52:40.608876   46816 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-451000 && echo "embed-certs-451000" | sudo tee /etc/hostname
	I0224 15:52:40.765516   46816 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-451000
	
	I0224 15:52:40.765607   46816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-451000
	I0224 15:52:40.828987   46816 main.go:141] libmachine: Using SSH client type: native
	I0224 15:52:40.829360   46816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61911 <nil> <nil>}
	I0224 15:52:40.829373   46816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-451000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-451000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-451000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 15:52:40.963814   46816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 15:52:40.963837   46816 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
	I0224 15:52:40.963854   46816 ubuntu.go:177] setting up certificates
	I0224 15:52:40.963862   46816 provision.go:83] configureAuth start
	I0224 15:52:40.963948   46816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-451000
	I0224 15:52:41.023734   46816 provision.go:138] copyHostCerts
	I0224 15:52:41.023840   46816 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
	I0224 15:52:41.023851   46816 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 15:52:41.023959   46816 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
	I0224 15:52:41.024202   46816 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
	I0224 15:52:41.024210   46816 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 15:52:41.024278   46816 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
	I0224 15:52:41.024439   46816 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
	I0224 15:52:41.024445   46816 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 15:52:41.024515   46816 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
	I0224 15:52:41.024646   46816 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.embed-certs-451000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-451000]
	I0224 15:52:41.096079   46816 provision.go:172] copyRemoteCerts
	I0224 15:52:41.096152   46816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 15:52:41.096204   46816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-451000
	I0224 15:52:41.154524   46816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61911 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/embed-certs-451000/id_rsa Username:docker}
	I0224 15:52:41.250954   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 15:52:41.268234   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0224 15:52:41.285945   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0224 15:52:41.304813   46816 provision.go:86] duration metric: configureAuth took 340.93717ms
	I0224 15:52:41.304829   46816 ubuntu.go:193] setting minikube options for container-runtime
	I0224 15:52:41.304975   46816 config.go:182] Loaded profile config "embed-certs-451000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:52:41.305044   46816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-451000
	I0224 15:52:41.364696   46816 main.go:141] libmachine: Using SSH client type: native
	I0224 15:52:41.365058   46816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61911 <nil> <nil>}
	I0224 15:52:41.365070   46816 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 15:52:41.499769   46816 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 15:52:41.499786   46816 ubuntu.go:71] root file system type: overlay
	I0224 15:52:41.499898   46816 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 15:52:41.499986   46816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-451000
	I0224 15:52:41.558609   46816 main.go:141] libmachine: Using SSH client type: native
	I0224 15:52:41.558962   46816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61911 <nil> <nil>}
	I0224 15:52:41.559013   46816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 15:52:41.702556   46816 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 15:52:41.702642   46816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-451000
	I0224 15:52:41.761181   46816 main.go:141] libmachine: Using SSH client type: native
	I0224 15:52:41.761559   46816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 61911 <nil> <nil>}
	I0224 15:52:41.761572   46816 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 15:52:41.901908   46816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 15:52:41.901923   46816 machine.go:91] provisioned docker machine in 1.364031381s
	I0224 15:52:41.901933   46816 start.go:300] post-start starting for "embed-certs-451000" (driver="docker")
	I0224 15:52:41.901938   46816 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 15:52:41.902017   46816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 15:52:41.902075   46816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-451000
	I0224 15:52:41.984369   46816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61911 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/embed-certs-451000/id_rsa Username:docker}
	I0224 15:52:42.077280   46816 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 15:52:42.080969   46816 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 15:52:42.080987   46816 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 15:52:42.080994   46816 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 15:52:42.081003   46816 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 15:52:42.081010   46816 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
	I0224 15:52:42.081104   46816 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
	I0224 15:52:42.081265   46816 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
	I0224 15:52:42.081427   46816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 15:52:42.088922   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:52:42.106269   46816 start.go:303] post-start completed in 204.323021ms
	I0224 15:52:42.106381   46816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 15:52:42.106435   46816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-451000
	I0224 15:52:42.164048   46816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61911 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/embed-certs-451000/id_rsa Username:docker}
	I0224 15:52:42.257039   46816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 15:52:42.261762   46816 fix.go:57] fixHost completed within 2.310069205s
	I0224 15:52:42.261778   46816 start.go:83] releasing machines lock for "embed-certs-451000", held for 2.31011791s
	I0224 15:52:42.261871   46816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-451000
	I0224 15:52:42.319992   46816 ssh_runner.go:195] Run: cat /version.json
	I0224 15:52:42.320025   46816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 15:52:42.320069   46816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-451000
	I0224 15:52:42.320103   46816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-451000
	I0224 15:52:42.381179   46816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61911 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/embed-certs-451000/id_rsa Username:docker}
	I0224 15:52:42.381246   46816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61911 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/embed-certs-451000/id_rsa Username:docker}
	I0224 15:52:42.530094   46816 ssh_runner.go:195] Run: systemctl --version
	I0224 15:52:42.534917   46816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 15:52:42.540319   46816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 15:52:42.556968   46816 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 15:52:42.557052   46816 ssh_runner.go:195] Run: which cri-dockerd
	I0224 15:52:42.560961   46816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 15:52:42.568383   46816 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 15:52:42.581408   46816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 15:52:42.589095   46816 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0224 15:52:42.589108   46816 start.go:485] detecting cgroup driver to use...
	I0224 15:52:42.589119   46816 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:52:42.589211   46816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:52:42.602548   46816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 15:52:42.611182   46816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 15:52:42.619591   46816 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 15:52:42.619658   46816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 15:52:42.628348   46816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:52:42.636818   46816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 15:52:42.645225   46816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 15:52:42.653648   46816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 15:52:42.661558   46816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 15:52:42.670037   46816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 15:52:42.677224   46816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 15:52:42.684377   46816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:52:42.756090   46816 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 15:52:42.827390   46816 start.go:485] detecting cgroup driver to use...
	I0224 15:52:42.827414   46816 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 15:52:42.827479   46816 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 15:52:42.842407   46816 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 15:52:42.842470   46816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 15:52:42.853646   46816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 15:52:42.868260   46816 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 15:52:42.967908   46816 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 15:52:43.067696   46816 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 15:52:43.067715   46816 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 15:52:43.081325   46816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:52:43.167827   46816 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 15:52:43.466653   46816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:52:43.536231   46816 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 15:52:43.605213   46816 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 15:52:43.672856   46816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 15:52:43.744156   46816 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 15:52:43.755796   46816 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 15:52:43.755879   46816 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 15:52:43.759894   46816 start.go:553] Will wait 60s for crictl version
	I0224 15:52:43.759950   46816 ssh_runner.go:195] Run: which crictl
	I0224 15:52:43.763557   46816 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 15:52:43.874456   46816 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 15:52:43.874541   46816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:52:43.899613   46816 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 15:52:43.969280   46816 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 15:52:43.969407   46816 cli_runner.go:164] Run: docker exec -t embed-certs-451000 dig +short host.docker.internal
	I0224 15:52:44.075904   46816 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 15:52:44.076040   46816 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 15:52:44.080659   46816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:52:44.090783   46816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-451000
	I0224 15:52:44.169704   46816 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 15:52:44.169792   46816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:52:44.190781   46816 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0224 15:52:44.190797   46816 docker.go:560] Images already preloaded, skipping extraction
	I0224 15:52:44.190871   46816 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 15:52:44.210896   46816 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0224 15:52:44.210914   46816 cache_images.go:84] Images are preloaded, skipping loading
	I0224 15:52:44.210995   46816 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 15:52:44.236934   46816 cni.go:84] Creating CNI manager for ""
	I0224 15:52:44.236952   46816 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 15:52:44.236970   46816 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 15:52:44.236987   46816 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-451000 NodeName:embed-certs-451000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 15:52:44.237126   46816 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-451000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 15:52:44.237197   46816 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-451000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:embed-certs-451000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 15:52:44.237261   46816 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 15:52:44.245342   46816 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 15:52:44.245403   46816 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 15:52:44.252801   46816 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0224 15:52:44.266059   46816 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 15:52:44.279110   46816 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0224 15:52:44.292032   46816 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0224 15:52:44.295880   46816 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 15:52:44.305930   46816 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/embed-certs-451000 for IP: 192.168.67.2
	I0224 15:52:44.305948   46816 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:52:44.306131   46816 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 15:52:44.306195   46816 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 15:52:44.306285   46816 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/embed-certs-451000/client.key
	I0224 15:52:44.306350   46816 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/embed-certs-451000/apiserver.key.c7fa3a9e
	I0224 15:52:44.306401   46816 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/embed-certs-451000/proxy-client.key
	I0224 15:52:44.306607   46816 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 15:52:44.306644   46816 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 15:52:44.306655   46816 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 15:52:44.306689   46816 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 15:52:44.306725   46816 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 15:52:44.306760   46816 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 15:52:44.306829   46816 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 15:52:44.307453   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/embed-certs-451000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 15:52:44.325004   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/embed-certs-451000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 15:52:44.342471   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/embed-certs-451000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 15:52:44.360163   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/embed-certs-451000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 15:52:44.377527   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 15:52:44.394977   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 15:52:44.412441   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 15:52:44.429952   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 15:52:44.447725   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 15:52:44.464895   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 15:52:44.482360   46816 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 15:52:44.499894   46816 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 15:52:44.512928   46816 ssh_runner.go:195] Run: openssl version
	I0224 15:52:44.518556   46816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 15:52:44.527021   46816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:52:44.531318   46816 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:52:44.531368   46816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 15:52:44.537123   46816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 15:52:44.544716   46816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 15:52:44.552834   46816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 15:52:44.556809   46816 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 15:52:44.556855   46816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 15:52:44.562370   46816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
	I0224 15:52:44.570140   46816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 15:52:44.578560   46816 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 15:52:44.582735   46816 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 15:52:44.582787   46816 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 15:52:44.588229   46816 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
	I0224 15:52:44.596022   46816 kubeadm.go:401] StartCluster: {Name:embed-certs-451000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-451000 Namespace:default APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 15:52:44.596139   46816 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 15:52:44.616716   46816 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 15:52:44.625147   46816 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0224 15:52:44.625161   46816 kubeadm.go:633] restartCluster start
	I0224 15:52:44.625213   46816 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0224 15:52:44.632410   46816 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:44.632478   46816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-451000
	I0224 15:52:44.691957   46816 kubeconfig.go:135] verify returned: extract IP: "embed-certs-451000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 15:52:44.692147   46816 kubeconfig.go:146] "embed-certs-451000" context is missing from /Users/jenkins/minikube-integration/15909-26406/kubeconfig - will repair!
	I0224 15:52:44.692459   46816 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/kubeconfig: {Name:mk1182fefc6ba3b7ea2d2356c47127703005a4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 15:52:44.694098   46816 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0224 15:52:44.702047   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:44.702109   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:44.710960   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:45.211755   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:45.211925   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:45.224293   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:45.711193   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:45.711369   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:45.722745   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:46.211279   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:46.211404   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:46.221840   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:46.712425   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:46.712593   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:46.724061   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:47.213128   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:47.213282   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:47.224333   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:47.712934   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:47.713037   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:47.722934   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:48.211969   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:48.212140   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:48.223483   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:48.713262   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:48.713390   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:48.724985   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:49.211347   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:49.211466   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:49.222415   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:49.713151   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:49.713305   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:49.724577   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:50.212550   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:50.212713   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:50.224033   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:50.713105   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:50.713249   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:50.723375   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:51.213271   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:51.213375   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:51.224486   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:51.712607   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:51.712752   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:51.724133   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:52.211707   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:52.211858   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:52.222889   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:52.713164   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:52.713330   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:52.724841   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:53.213168   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:53.213364   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:53.224439   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:53.711435   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:53.711577   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:53.722741   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:54.212599   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:54.212718   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:54.224211   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:54.712198   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:54.712360   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:54.723654   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:54.723665   46816 api_server.go:165] Checking apiserver status ...
	I0224 15:52:54.723718   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 15:52:54.732280   46816 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:54.732293   46816 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0224 15:52:54.732301   46816 kubeadm.go:1120] stopping kube-system containers ...
	I0224 15:52:54.732385   46816 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 15:52:54.752662   46816 docker.go:456] Stopping containers: [4d030352432d ea72df2fda8e 66cbccb45072 27db6bf3bad1 6d03d3e89a29 a02871ea4df6 290ede76f363 5e2916bdb280 b394ed012da0 6d2526af966b 11b5d7d15551 77c57839ff16 0103ac15677f df0048516d47 deac0e67b5ea]
	I0224 15:52:54.752747   46816 ssh_runner.go:195] Run: docker stop 4d030352432d ea72df2fda8e 66cbccb45072 27db6bf3bad1 6d03d3e89a29 a02871ea4df6 290ede76f363 5e2916bdb280 b394ed012da0 6d2526af966b 11b5d7d15551 77c57839ff16 0103ac15677f df0048516d47 deac0e67b5ea
	I0224 15:52:54.773250   46816 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0224 15:52:54.783842   46816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 15:52:54.791696   46816 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb 24 23:51 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 24 23:51 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Feb 24 23:51 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb 24 23:51 /etc/kubernetes/scheduler.conf
	
	I0224 15:52:54.791756   46816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 15:52:54.799511   46816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 15:52:54.807210   46816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 15:52:54.814597   46816 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:54.814653   46816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 15:52:54.821945   46816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 15:52:54.829368   46816 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:52:54.829423   46816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 15:52:54.836539   46816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 15:52:54.844169   46816 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0224 15:52:54.844181   46816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:52:54.898286   46816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:52:55.309211   46816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:52:55.445993   46816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:52:55.503845   46816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:52:55.592334   46816 api_server.go:51] waiting for apiserver process to appear ...
	I0224 15:52:55.592416   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:52:56.155442   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:52:56.655507   46816 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:52:56.668189   46816 api_server.go:71] duration metric: took 1.075851252s to wait for apiserver process to appear ...
	I0224 15:52:56.668205   46816 api_server.go:87] waiting for apiserver healthz status ...
	I0224 15:52:56.668219   46816 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61910/healthz ...
	I0224 15:52:56.669612   46816 api_server.go:268] stopped: https://127.0.0.1:61910/healthz: Get "https://127.0.0.1:61910/healthz": EOF
	I0224 15:52:57.170228   46816 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61910/healthz ...
	I0224 15:52:59.531108   46816 api_server.go:278] https://127.0.0.1:61910/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 15:52:59.531133   46816 api_server.go:102] status: https://127.0.0.1:61910/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 15:52:59.669755   46816 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61910/healthz ...
	I0224 15:52:59.675434   46816 api_server.go:278] https://127.0.0.1:61910/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 15:52:59.675454   46816 api_server.go:102] status: https://127.0.0.1:61910/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 15:53:00.170725   46816 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61910/healthz ...
	I0224 15:53:00.177928   46816 api_server.go:278] https://127.0.0.1:61910/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 15:53:00.177942   46816 api_server.go:102] status: https://127.0.0.1:61910/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 15:53:00.669754   46816 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61910/healthz ...
	I0224 15:53:00.674950   46816 api_server.go:278] https://127.0.0.1:61910/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 15:53:00.674966   46816 api_server.go:102] status: https://127.0.0.1:61910/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 15:53:01.169976   46816 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:61910/healthz ...
	I0224 15:53:01.177233   46816 api_server.go:278] https://127.0.0.1:61910/healthz returned 200:
	ok
	I0224 15:53:01.184442   46816 api_server.go:140] control plane version: v1.26.1
	I0224 15:53:01.184456   46816 api_server.go:130] duration metric: took 4.516204663s to wait for apiserver health ...
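	Note: the 403 ("system:anonymous") and 500 responses above are expected while the apiserver's post-start hooks (RBAC bootstrap roles, priority classes) are still completing; minikube simply keeps polling /healthz until it returns 200. A rough bash equivalent of that readiness loop (port 61910 is the host-mapped apiserver port from this particular run; -k because the probe is unauthenticated):

	    # sketch: poll the apiserver health endpoint until it reports 200
	    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://127.0.0.1:61910/healthz)" = 200 ]; do
	      sleep 0.5
	    done
	    echo "apiserver healthy"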
	I0224 15:53:01.184465   46816 cni.go:84] Creating CNI manager for ""
	I0224 15:53:01.184476   46816 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 15:53:01.207602   46816 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 15:53:01.228944   46816 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 15:53:01.238073   46816 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
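	Note: the 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration referred to at the "Configuring bridge CNI" step. Its exact contents are not reproduced in this log; a bridge conflist of this general shape typically looks like the following (illustrative values only, shown as a bash heredoc):

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF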
	I0224 15:53:01.255075   46816 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 15:53:01.263257   46816 system_pods.go:59] 8 kube-system pods found
	I0224 15:53:01.263274   46816 system_pods.go:61] "coredns-787d4945fb-ftwqg" [09680948-ed7e-4dfe-8376-caa6f2e05c8b] Running
	I0224 15:53:01.263280   46816 system_pods.go:61] "etcd-embed-certs-451000" [5db77283-297c-458a-9f42-eea5e08c9779] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 15:53:01.263283   46816 system_pods.go:61] "kube-apiserver-embed-certs-451000" [0185eaa0-8746-4c9b-af0f-185b0914f492] Running
	I0224 15:53:01.263288   46816 system_pods.go:61] "kube-controller-manager-embed-certs-451000" [473b12f3-b1fa-454d-94f9-1529a66cbc54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0224 15:53:01.263291   46816 system_pods.go:61] "kube-proxy-28r72" [26c469a6-0bf5-4ad1-824b-e9cf8936b60f] Running
	I0224 15:53:01.263295   46816 system_pods.go:61] "kube-scheduler-embed-certs-451000" [601785fa-1648-4d45-aaaa-456946e6ae81] Running
	I0224 15:53:01.263301   46816 system_pods.go:61] "metrics-server-7997d45854-pc475" [2016a8cf-709a-49d6-a35e-2bd62f421d90] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0224 15:53:01.263304   46816 system_pods.go:61] "storage-provisioner" [7051016a-1fa8-462c-b8cc-0e85abad27cf] Running
	I0224 15:53:01.263311   46816 system_pods.go:74] duration metric: took 8.221552ms to wait for pod list to return data ...
	I0224 15:53:01.263317   46816 node_conditions.go:102] verifying NodePressure condition ...
	I0224 15:53:01.267585   46816 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0224 15:53:01.267601   46816 node_conditions.go:123] node cpu capacity is 6
	I0224 15:53:01.267611   46816 node_conditions.go:105] duration metric: took 4.289535ms to run NodePressure ...
	I0224 15:53:01.267624   46816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 15:53:01.498490   46816 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0224 15:53:01.552228   46816 kubeadm.go:784] kubelet initialised
	I0224 15:53:01.552242   46816 kubeadm.go:785] duration metric: took 53.736069ms waiting for restarted kubelet to initialise ...
	I0224 15:53:01.552249   46816 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 15:53:01.559129   46816 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-ftwqg" in "kube-system" namespace to be "Ready" ...
	I0224 15:53:01.569653   46816 pod_ready.go:92] pod "coredns-787d4945fb-ftwqg" in "kube-system" namespace has status "Ready":"True"
	I0224 15:53:01.569664   46816 pod_ready.go:81] duration metric: took 10.521126ms waiting for pod "coredns-787d4945fb-ftwqg" in "kube-system" namespace to be "Ready" ...
	I0224 15:53:01.569670   46816 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-451000" in "kube-system" namespace to be "Ready" ...
	I0224 15:53:03.582795   46816 pod_ready.go:102] pod "etcd-embed-certs-451000" in "kube-system" namespace has status "Ready":"False"
	I0224 15:53:05.584469   46816 pod_ready.go:102] pod "etcd-embed-certs-451000" in "kube-system" namespace has status "Ready":"False"
	I0224 15:53:08.083550   46816 pod_ready.go:102] pod "etcd-embed-certs-451000" in "kube-system" namespace has status "Ready":"False"
	I0224 15:53:10.083999   46816 pod_ready.go:92] pod "etcd-embed-certs-451000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:53:10.084013   46816 pod_ready.go:81] duration metric: took 8.514262253s waiting for pod "etcd-embed-certs-451000" in "kube-system" namespace to be "Ready" ...
	I0224 15:53:10.084019   46816 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-451000" in "kube-system" namespace to be "Ready" ...
	I0224 15:53:12.095297   46816 pod_ready.go:102] pod "kube-apiserver-embed-certs-451000" in "kube-system" namespace has status "Ready":"False"
	I0224 15:53:14.095729   46816 pod_ready.go:92] pod "kube-apiserver-embed-certs-451000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:53:14.095742   46816 pod_ready.go:81] duration metric: took 4.011673934s waiting for pod "kube-apiserver-embed-certs-451000" in "kube-system" namespace to be "Ready" ...
	I0224 15:53:14.095749   46816 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-451000" in "kube-system" namespace to be "Ready" ...
	I0224 15:53:14.100976   46816 pod_ready.go:92] pod "kube-controller-manager-embed-certs-451000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:53:14.100986   46816 pod_ready.go:81] duration metric: took 5.231141ms waiting for pod "kube-controller-manager-embed-certs-451000" in "kube-system" namespace to be "Ready" ...
	I0224 15:53:14.100992   46816 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-28r72" in "kube-system" namespace to be "Ready" ...
	I0224 15:53:14.105779   46816 pod_ready.go:92] pod "kube-proxy-28r72" in "kube-system" namespace has status "Ready":"True"
	I0224 15:53:14.105787   46816 pod_ready.go:81] duration metric: took 4.790898ms waiting for pod "kube-proxy-28r72" in "kube-system" namespace to be "Ready" ...
	I0224 15:53:14.105793   46816 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-451000" in "kube-system" namespace to be "Ready" ...
	I0224 15:53:14.112664   46816 pod_ready.go:92] pod "kube-scheduler-embed-certs-451000" in "kube-system" namespace has status "Ready":"True"
	I0224 15:53:14.112675   46816 pod_ready.go:81] duration metric: took 6.877707ms waiting for pod "kube-scheduler-embed-certs-451000" in "kube-system" namespace to be "Ready" ...
	I0224 15:53:14.112682   46816 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-pc475" in "kube-system" namespace to be "Ready" ...
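	Note: each pod_ready block above waits (up to 4m0s) for one system-critical pod to report the Ready condition. The same checks can be reproduced with kubectl against this profile (sketch; assumes minikube's usual kubeconfig context name for the embed-certs-451000 profile):

	    # sketch: mirror the readiness waits from the log above
	    kubectl --context embed-certs-451000 -n kube-system get pods
	    kubectl --context embed-certs-451000 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m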
	I0224 15:53:16.914630   46043 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0224 15:53:16.914882   46043 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0224 15:53:16.914899   46043 kubeadm.go:322] 
	I0224 15:53:16.914979   46043 kubeadm.go:322] Unfortunately, an error has occurred:
	I0224 15:53:16.915043   46043 kubeadm.go:322] 	timed out waiting for the condition
	I0224 15:53:16.915054   46043 kubeadm.go:322] 
	I0224 15:53:16.915113   46043 kubeadm.go:322] This error is likely caused by:
	I0224 15:53:16.915150   46043 kubeadm.go:322] 	- The kubelet is not running
	I0224 15:53:16.915294   46043 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0224 15:53:16.915309   46043 kubeadm.go:322] 
	I0224 15:53:16.915480   46043 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0224 15:53:16.915537   46043 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0224 15:53:16.915594   46043 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0224 15:53:16.915604   46043 kubeadm.go:322] 
	I0224 15:53:16.915722   46043 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0224 15:53:16.915838   46043 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0224 15:53:16.915942   46043 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0224 15:53:16.915990   46043 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0224 15:53:16.916082   46043 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0224 15:53:16.916110   46043 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0224 15:53:16.918426   46043 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0224 15:53:16.918490   46043 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0224 15:53:16.918602   46043 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0224 15:53:16.918692   46043 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 15:53:16.918768   46043 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0224 15:53:16.918833   46043 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0224 15:53:16.918853   46043 kubeadm.go:403] StartCluster complete in 8m4.70614135s
	I0224 15:53:16.918948   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0224 15:53:16.939129   46043 logs.go:277] 0 containers: []
	W0224 15:53:16.939142   46043 logs.go:279] No container was found matching "kube-apiserver"
	I0224 15:53:16.939209   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0224 15:53:16.127208   46816 pod_ready.go:102] pod "metrics-server-7997d45854-pc475" in "kube-system" namespace has status "Ready":"False"
	I0224 15:53:18.624692   46816 pod_ready.go:102] pod "metrics-server-7997d45854-pc475" in "kube-system" namespace has status "Ready":"False"
	I0224 15:53:16.959355   46043 logs.go:277] 0 containers: []
	W0224 15:53:16.975579   46043 logs.go:279] No container was found matching "etcd"
	I0224 15:53:16.975678   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0224 15:53:16.995914   46043 logs.go:277] 0 containers: []
	W0224 15:53:16.995928   46043 logs.go:279] No container was found matching "coredns"
	I0224 15:53:16.995998   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0224 15:53:17.017414   46043 logs.go:277] 0 containers: []
	W0224 15:53:17.017429   46043 logs.go:279] No container was found matching "kube-scheduler"
	I0224 15:53:17.017502   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0224 15:53:17.053061   46043 logs.go:277] 0 containers: []
	W0224 15:53:17.053074   46043 logs.go:279] No container was found matching "kube-proxy"
	I0224 15:53:17.053140   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0224 15:53:17.072468   46043 logs.go:277] 0 containers: []
	W0224 15:53:17.072481   46043 logs.go:279] No container was found matching "kube-controller-manager"
	I0224 15:53:17.072550   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0224 15:53:17.092166   46043 logs.go:277] 0 containers: []
	W0224 15:53:17.092180   46043 logs.go:279] No container was found matching "kindnet"
	I0224 15:53:17.092247   46043 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0224 15:53:17.111270   46043 logs.go:277] 0 containers: []
	W0224 15:53:17.111293   46043 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0224 15:53:17.111305   46043 logs.go:123] Gathering logs for kubelet ...
	I0224 15:53:17.111313   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0224 15:53:17.154353   46043 logs.go:123] Gathering logs for dmesg ...
	I0224 15:53:17.154371   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0224 15:53:17.167202   46043 logs.go:123] Gathering logs for describe nodes ...
	I0224 15:53:17.167216   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0224 15:53:17.219728   46043 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0224 15:53:17.219738   46043 logs.go:123] Gathering logs for Docker ...
	I0224 15:53:17.219745   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0224 15:53:17.240860   46043 logs.go:123] Gathering logs for container status ...
	I0224 15:53:17.240874   46043 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0224 15:53:19.285624   46043 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044720147s)
	W0224 15:53:19.285734   46043 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0224 15:53:19.285749   46043 out.go:239] * 
	W0224 15:53:19.285872   46043 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 15:53:19.285893   46043 out.go:239] * 
	W0224 15:53:19.286556   46043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0224 15:53:19.329163   46043 out.go:177] 
	W0224 15:53:19.371265   46043 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0224 15:53:19.371366   46043 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0224 15:53:19.371412   46043 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0224 15:53:19.430248   46043 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 23:45:08 UTC, end at Fri 2023-02-24 23:53:20 UTC. --
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.356016738Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.356576667Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.356628748Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357726958Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357784747Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357825944Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357836059Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357890010Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357942974Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357965911Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357981566Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357993788Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.358105260Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.358296891Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.358366239Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.358900961Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.366092775Z" level=info msg="Loading containers: start."
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.444603684Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.477227417Z" level=info msg="Loading containers: done."
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.485430316Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.485498171Z" level=info msg="Daemon has completed initialization"
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.506662985Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 systemd[1]: Started Docker Application Container Engine.
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.510654920Z" level=info msg="API listen on [::]:2376"
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.516789696Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-02-24T23:53:22Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:53:23 up  2:52,  0 users,  load average: 1.45, 1.27, 1.18
	Linux old-k8s-version-583000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 23:45:08 UTC, end at Fri 2023-02-24 23:53:23 UTC. --
	Feb 24 23:53:21 old-k8s-version-583000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 24 23:53:22 old-k8s-version-583000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Feb 24 23:53:22 old-k8s-version-583000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 24 23:53:22 old-k8s-version-583000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 24 23:53:22 old-k8s-version-583000 kubelet[13998]: I0224 23:53:22.307783   13998 server.go:410] Version: v1.16.0
	Feb 24 23:53:22 old-k8s-version-583000 kubelet[13998]: I0224 23:53:22.308329   13998 plugins.go:100] No cloud provider specified.
	Feb 24 23:53:22 old-k8s-version-583000 kubelet[13998]: I0224 23:53:22.308378   13998 server.go:773] Client rotation is on, will bootstrap in background
	Feb 24 23:53:22 old-k8s-version-583000 kubelet[13998]: I0224 23:53:22.310466   13998 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 24 23:53:22 old-k8s-version-583000 kubelet[13998]: W0224 23:53:22.311186   13998 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 24 23:53:22 old-k8s-version-583000 kubelet[13998]: W0224 23:53:22.311265   13998 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 24 23:53:22 old-k8s-version-583000 kubelet[13998]: F0224 23:53:22.311290   13998 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 24 23:53:22 old-k8s-version-583000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 24 23:53:22 old-k8s-version-583000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 24 23:53:22 old-k8s-version-583000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Feb 24 23:53:22 old-k8s-version-583000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 24 23:53:22 old-k8s-version-583000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 24 23:53:23 old-k8s-version-583000 kubelet[14012]: I0224 23:53:23.049568   14012 server.go:410] Version: v1.16.0
	Feb 24 23:53:23 old-k8s-version-583000 kubelet[14012]: I0224 23:53:23.049954   14012 plugins.go:100] No cloud provider specified.
	Feb 24 23:53:23 old-k8s-version-583000 kubelet[14012]: I0224 23:53:23.049991   14012 server.go:773] Client rotation is on, will bootstrap in background
	Feb 24 23:53:23 old-k8s-version-583000 kubelet[14012]: I0224 23:53:23.051732   14012 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 24 23:53:23 old-k8s-version-583000 kubelet[14012]: W0224 23:53:23.052392   14012 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 24 23:53:23 old-k8s-version-583000 kubelet[14012]: W0224 23:53:23.052469   14012 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 24 23:53:23 old-k8s-version-583000 kubelet[14012]: F0224 23:53:23.052496   14012 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 24 23:53:23 old-k8s-version-583000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 24 23:53:23 old-k8s-version-583000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0224 15:53:23.161673   46924 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-583000 -n old-k8s-version-583000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 2 (405.002196ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-583000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (497.07s)
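Note: the root cause visible in the kubelet journal above is "failed to run Kubelet: mountpoint for cpu not found" repeating on every restart (counter at 161/162), so kubeadm's wait-control-plane phase for the v1.16.0 control plane never succeeds and SecondStart times out. The log's own suggestion is the first thing to try; as a sketch (profile name and kubernetes version taken from this run, other start flags omitted, and it is not confirmed here that the flag resolves this particular cgroup layout on Docker 23.0.1):

    out/minikube-darwin-amd64 start -p old-k8s-version-583000 --driver=docker \
      --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd
    # then inspect the kubelet unit inside the node
    out/minikube-darwin-amd64 ssh -p old-k8s-version-583000 "sudo journalctl -u kubelet -n 50"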

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
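Note: this subtest polls for pods labelled k8s-app=kubernetes-dashboard for up to 9m; the EOF warnings that follow mean the apiserver behind https://127.0.0.1:61759 is not responding at all, consistent with the failed start above. The equivalent manual check with the same label selector (sketch; assumes the old-k8s-version-583000 kubeconfig context):

    kubectl --context old-k8s-version-583000 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard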
E0224 15:53:30.357424   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:54:04.464815   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:54:20.889773   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:54:24.742287   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:54:55.143088   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:55:10.431055   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:55:47.847051   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:55:54.190064   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:56:01.146886   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:56:07.520486   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:56:10.310155   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:10.316052   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:10.326360   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:10.348459   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:10.389146   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:10.471297   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:10.631489   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:10.953617   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:11.594001   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:12.875374   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:15.435725   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:18.190160   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:56:20.556521   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:22.363771   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:56:30.797387   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
E0224 15:56:33.488055   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 15:56:35.289659   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:56:51.277805   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:57:30.701865   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:57:32.323040   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:57:41.503987   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:57:46.399191   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:57:58.423121   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:58:19.400008   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:58:30.445075   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:58:54.246960   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:59:09.453641   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:59:20.979113   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 15:59:24.831288   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 15:59:53.500889   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
E0224 15:59:55.231805   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:00:10.520350   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:00:44.033875   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:00:54.283353   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 16:01:01.240152   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:01:07.614449   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 16:01:10.402616   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:01:35.382689   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 16:01:38.092603   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:02:41.513075   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:02:46.408679   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-583000 -n old-k8s-version-583000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 2 (417.268242ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-583000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
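The timeout above comes from a label-selector poll against the kubernetes-dashboard namespace that keeps hitting EOF on https://127.0.0.1:61759. Below is a minimal Go sketch of an equivalent check, for illustration only: it assumes client-go is available and that KUBECONFIG points at the old-k8s-version-583000 profile, and the 2s/9m values simply mirror the timeout reported in the log rather than the test suite's actual helper.

// podwait_sketch.go: hedged illustration of the dashboard-pod wait that timed out above.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at the old-k8s-version-583000 profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 9m, mirroring the timeout reported in the log.
	err = wait.PollImmediate(2*time.Second, 9*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// Transient apiserver failures (like the EOFs above) are retried.
			fmt.Println("warning:", err)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == "Running" {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for kubernetes-dashboard pod:", err)
	}
}

Against a healthy cluster this exits almost immediately; in the run above it would keep logging the same EOF warning until the 9m deadline, because the apiserver behind 127.0.0.1:61759 is down.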
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-583000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-583000:

-- stdout --
	[
	    {
	        "Id": "9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa",
	        "Created": "2023-02-24T23:39:27.334833203Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 677863,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T23:45:08.286781852Z",
	            "FinishedAt": "2023-02-24T23:45:05.367336651Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hostname",
	        "HostsPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hosts",
	        "LogPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa-json.log",
	        "Name": "/old-k8s-version-583000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-583000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-583000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-583000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-583000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-583000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3947a0f2180836ac916ac27cc999772cb08b4096aeffcc7de4c5c0d9b03b291e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61760"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61761"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61762"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61763"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61759"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3947a0f21808",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-583000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9cf258058c71",
	                        "old-k8s-version-583000"
	                    ],
	                    "NetworkID": "2385c01da446b4577899b64cfb7c6c9559167bd3e5ad2b3a0a47d05890f119a8",
	                    "EndpointID": "4f2e701947effd2e133234f8a53a9152bf92712aec3312653e8d2e8dfb2ddc47",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
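The inspect output also explains the address in the repeated EOF warnings: 8443/tcp inside the container is published on 127.0.0.1:61759, exactly the endpoint the dashboard poll keeps failing against. The following Go sketch reads that mapping back with the Docker Engine SDK; the container name matches the profile above, but the snippet itself is illustrative and not part of the test suite.

// portcheck_sketch.go: hedged illustration of reading the apiserver host port
// (8443/tcp -> 127.0.0.1:61759 above) back out of the container's inspect data.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Container name matches the profile inspected above.
	inspect, err := cli.ContainerInspect(context.Background(), "old-k8s-version-583000")
	if err != nil {
		panic(err)
	}
	// NetworkSettings.Ports maps container ports to host bindings; the 8443/tcp
	// entry is the 127.0.0.1:61759 endpoint the dashboard poll keeps hitting.
	for _, b := range inspect.NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIP, b.HostPort)
	}
}

The CLI equivalent is docker port old-k8s-version-583000 8443, which should print 127.0.0.1:61759 while the container is up.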
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 2 (412.564067ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-583000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-583000 logs -n 25: (4.097713746s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-540000        | no-preload-540000            | jenkins | v1.29.0 | 24 Feb 23 15:41 PST | 24 Feb 23 15:41 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p no-preload-540000                              | no-preload-540000            | jenkins | v1.29.0 | 24 Feb 23 15:41 PST | 24 Feb 23 15:41 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-540000             | no-preload-540000            | jenkins | v1.29.0 | 24 Feb 23 15:41 PST | 24 Feb 23 15:41 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-540000                              | no-preload-540000            | jenkins | v1.29.0 | 24 Feb 23 15:41 PST | 24 Feb 23 15:51 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-583000   | old-k8s-version-583000       | jenkins | v1.29.0 | 24 Feb 23 15:43 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-583000                         | old-k8s-version-583000       | jenkins | v1.29.0 | 24 Feb 23 15:45 PST | 24 Feb 23 15:45 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-583000        | old-k8s-version-583000       | jenkins | v1.29.0 | 24 Feb 23 15:45 PST | 24 Feb 23 15:45 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-583000                         | old-k8s-version-583000       | jenkins | v1.29.0 | 24 Feb 23 15:45 PST |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-540000 sudo                         | no-preload-540000            | jenkins | v1.29.0 | 24 Feb 23 15:51 PST | 24 Feb 23 15:51 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-540000                              | no-preload-540000            | jenkins | v1.29.0 | 24 Feb 23 15:51 PST | 24 Feb 23 15:51 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-540000                              | no-preload-540000            | jenkins | v1.29.0 | 24 Feb 23 15:51 PST | 24 Feb 23 15:51 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-540000                              | no-preload-540000            | jenkins | v1.29.0 | 24 Feb 23 15:51 PST | 24 Feb 23 15:51 PST |
	| delete  | -p no-preload-540000                              | no-preload-540000            | jenkins | v1.29.0 | 24 Feb 23 15:51 PST | 24 Feb 23 15:51 PST |
	| start   | -p embed-certs-451000                             | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 15:51 PST | 24 Feb 23 15:52 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-451000       | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 15:52 PST | 24 Feb 23 15:52 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-451000                             | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 15:52 PST | 24 Feb 23 15:52 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-451000            | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 15:52 PST | 24 Feb 23 15:52 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-451000                             | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 15:52 PST | 24 Feb 23 16:01 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-451000 sudo                        | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 16:02 PST | 24 Feb 23 16:02 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p embed-certs-451000                             | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 16:02 PST | 24 Feb 23 16:02 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p embed-certs-451000                             | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 16:02 PST | 24 Feb 23 16:02 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p embed-certs-451000                             | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 16:02 PST | 24 Feb 23 16:02 PST |
	| delete  | -p embed-certs-451000                             | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 16:02 PST | 24 Feb 23 16:02 PST |
	| delete  | -p                                                | disable-driver-mounts-669000 | jenkins | v1.29.0 | 24 Feb 23 16:02 PST | 24 Feb 23 16:02 PST |
	|         | disable-driver-mounts-669000                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-367000 | jenkins | v1.29.0 | 24 Feb 23 16:02 PST |                     |
	|         | default-k8s-diff-port-367000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 16:02:14
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 16:02:14.908158   47618 out.go:296] Setting OutFile to fd 1 ...
	I0224 16:02:14.908316   47618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 16:02:14.908321   47618 out.go:309] Setting ErrFile to fd 2...
	I0224 16:02:14.908325   47618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 16:02:14.908436   47618 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 16:02:14.909844   47618 out.go:303] Setting JSON to false
	I0224 16:02:14.928577   47618 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10908,"bootTime":1677272426,"procs":394,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 16:02:14.928648   47618 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 16:02:14.950631   47618 out.go:177] * [default-k8s-diff-port-367000] minikube v1.29.0 on Darwin 13.2.1
	I0224 16:02:14.994041   47618 notify.go:220] Checking for updates...
	I0224 16:02:14.994087   47618 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 16:02:15.015783   47618 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 16:02:15.037929   47618 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 16:02:15.059964   47618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 16:02:15.081780   47618 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	I0224 16:02:15.102792   47618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 16:02:15.125082   47618 config.go:182] Loaded profile config "old-k8s-version-583000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0224 16:02:15.125124   47618 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 16:02:15.185491   47618 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 16:02:15.185638   47618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 16:02:15.327000   47618 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-25 00:02:15.234932141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 16:02:15.350058   47618 out.go:177] * Using the docker driver based on user configuration
	I0224 16:02:15.392837   47618 start.go:296] selected driver: docker
	I0224 16:02:15.392891   47618 start.go:857] validating driver "docker" against <nil>
	I0224 16:02:15.392909   47618 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 16:02:15.396815   47618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 16:02:15.539390   47618 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-25 00:02:15.446150242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 16:02:15.539515   47618 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0224 16:02:15.539696   47618 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 16:02:15.561357   47618 out.go:177] * Using Docker Desktop driver with root privileges
	I0224 16:02:15.582894   47618 cni.go:84] Creating CNI manager for ""
	I0224 16:02:15.582933   47618 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 16:02:15.582941   47618 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0224 16:02:15.582953   47618 start_flags.go:319] config:
	{Name:default-k8s-diff-port-367000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-367000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 16:02:15.625064   47618 out.go:177] * Starting control plane node default-k8s-diff-port-367000 in cluster default-k8s-diff-port-367000
	I0224 16:02:15.645954   47618 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 16:02:15.667257   47618 out.go:177] * Pulling base image ...
	I0224 16:02:15.689190   47618 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 16:02:15.689251   47618 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 16:02:15.689338   47618 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 16:02:15.689375   47618 cache.go:57] Caching tarball of preloaded images
	I0224 16:02:15.689613   47618 preload.go:174] Found /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 16:02:15.689628   47618 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 16:02:15.690736   47618 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/config.json ...
	I0224 16:02:15.690876   47618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/config.json: {Name:mkde04332ce7f83baa534570271bddb9e34ede50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:02:15.747963   47618 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 16:02:15.747980   47618 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 16:02:15.748001   47618 cache.go:193] Successfully downloaded all kic artifacts
	I0224 16:02:15.748054   47618 start.go:364] acquiring machines lock for default-k8s-diff-port-367000: {Name:mk2a8c93c29a9280eaf9a026efd00254a4b5b937 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 16:02:15.748235   47618 start.go:368] acquired machines lock for "default-k8s-diff-port-367000" in 168.685µs
	I0224 16:02:15.748271   47618 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-367000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-367000 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 16:02:15.748342   47618 start.go:125] createHost starting for "" (driver="docker")
	I0224 16:02:15.792006   47618 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0224 16:02:15.792466   47618 start.go:159] libmachine.API.Create for "default-k8s-diff-port-367000" (driver="docker")
	I0224 16:02:15.792514   47618 client.go:168] LocalClient.Create starting
	I0224 16:02:15.792731   47618 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem
	I0224 16:02:15.792824   47618 main.go:141] libmachine: Decoding PEM data...
	I0224 16:02:15.792857   47618 main.go:141] libmachine: Parsing certificate...
	I0224 16:02:15.792978   47618 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem
	I0224 16:02:15.793039   47618 main.go:141] libmachine: Decoding PEM data...
	I0224 16:02:15.793055   47618 main.go:141] libmachine: Parsing certificate...
	I0224 16:02:15.793843   47618 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-367000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0224 16:02:15.849651   47618 cli_runner.go:211] docker network inspect default-k8s-diff-port-367000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0224 16:02:15.849756   47618 network_create.go:281] running [docker network inspect default-k8s-diff-port-367000] to gather additional debugging logs...
	I0224 16:02:15.849772   47618 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-367000
	W0224 16:02:15.904161   47618 cli_runner.go:211] docker network inspect default-k8s-diff-port-367000 returned with exit code 1
	I0224 16:02:15.904186   47618 network_create.go:284] error running [docker network inspect default-k8s-diff-port-367000]: docker network inspect default-k8s-diff-port-367000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-367000
	I0224 16:02:15.904198   47618 network_create.go:286] output of [docker network inspect default-k8s-diff-port-367000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-367000
	
	** /stderr **
	I0224 16:02:15.904281   47618 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 16:02:15.960250   47618 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0224 16:02:15.960587   47618 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000444cc0}
	I0224 16:02:15.960600   47618 network_create.go:123] attempt to create docker network default-k8s-diff-port-367000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0224 16:02:15.960667   47618 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-367000 default-k8s-diff-port-367000
	W0224 16:02:16.014595   47618 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-367000 default-k8s-diff-port-367000 returned with exit code 1
	W0224 16:02:16.014636   47618 network_create.go:148] failed to create docker network default-k8s-diff-port-367000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-367000 default-k8s-diff-port-367000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0224 16:02:16.014654   47618 network_create.go:115] failed to create docker network default-k8s-diff-port-367000 192.168.58.0/24, will retry: subnet is taken
	I0224 16:02:16.016055   47618 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0224 16:02:16.016364   47618 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f8c360}
	I0224 16:02:16.016379   47618 network_create.go:123] attempt to create docker network default-k8s-diff-port-367000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0224 16:02:16.016448   47618 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-367000 default-k8s-diff-port-367000
	I0224 16:02:16.105030   47618 network_create.go:107] docker network default-k8s-diff-port-367000 192.168.67.0/24 created
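	[editor note] The lines above record minikube's subnet fallback: 192.168.49.0/24 was skipped as reserved, 192.168.58.0/24 failed with "Pool overlaps with other one on this address space", and the network was finally created on 192.168.67.0/24. The following is a hypothetical, minimal bash sketch of that retry behaviour (it is not minikube's actual implementation; the candidate subnet list and network name are illustrative assumptions), shown only to make the pattern in the log easier to follow:

	#!/usr/bin/env bash
	# Sketch: try candidate /24 subnets in order and fall back to the next one
	# whenever `docker network create` reports an overlapping address pool.
	NET_NAME="example-net"   # hypothetical name; minikube uses the profile name
	for SUBNET in 192.168.49 192.168.58 192.168.67 192.168.76; do
	  if OUT=$(docker network create --driver=bridge \
	        --subnet="${SUBNET}.0/24" --gateway="${SUBNET}.1" \
	        -o com.docker.network.driver.mtu=1500 "${NET_NAME}" 2>&1); then
	    echo "created ${NET_NAME} on ${SUBNET}.0/24"
	    break
	  elif echo "${OUT}" | grep -q "Pool overlaps"; then
	    continue   # subnet already taken by another docker network; try the next one
	  else
	    echo "docker network create failed: ${OUT}" >&2
	    break
	  fi
	done

	In the run above the first free candidate after the reserved/overlapping ones was 192.168.67.0/24, which is why the node container is then given the static IP 192.168.67.2 on that network.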
	I0224 16:02:16.105071   47618 kic.go:117] calculated static IP "192.168.67.2" for the "default-k8s-diff-port-367000" container
	I0224 16:02:16.105202   47618 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 16:02:16.161715   47618 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-367000 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-367000 --label created_by.minikube.sigs.k8s.io=true
	I0224 16:02:16.218291   47618 oci.go:103] Successfully created a docker volume default-k8s-diff-port-367000
	I0224 16:02:16.218407   47618 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-367000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-367000 --entrypoint /usr/bin/test -v default-k8s-diff-port-367000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0224 16:02:16.669817   47618 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-367000
	I0224 16:02:16.669850   47618 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 16:02:16.669866   47618 kic.go:190] Starting extracting preloaded images to volume ...
	I0224 16:02:16.669993   47618 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-367000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 16:02:23.304107   47618 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-367000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.633848176s)
	I0224 16:02:23.304127   47618 kic.go:199] duration metric: took 6.634062 seconds to extract preloaded images to volume
	I0224 16:02:23.304246   47618 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 16:02:23.451185   47618 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-367000 --name default-k8s-diff-port-367000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-367000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-367000 --network default-k8s-diff-port-367000 --ip 192.168.67.2 --volume default-k8s-diff-port-367000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0224 16:02:23.819488   47618 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-367000 --format={{.State.Running}}
	I0224 16:02:23.885715   47618 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-367000 --format={{.State.Status}}
	I0224 16:02:23.955358   47618 cli_runner.go:164] Run: docker exec default-k8s-diff-port-367000 stat /var/lib/dpkg/alternatives/iptables
	I0224 16:02:24.071021   47618 oci.go:144] the created container "default-k8s-diff-port-367000" has a running status.
	I0224 16:02:24.071055   47618 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/default-k8s-diff-port-367000/id_rsa...
	I0224 16:02:24.191992   47618 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/default-k8s-diff-port-367000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 16:02:24.301408   47618 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-367000 --format={{.State.Status}}
	I0224 16:02:24.362157   47618 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 16:02:24.362177   47618 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-367000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 16:02:24.470708   47618 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-367000 --format={{.State.Status}}
	I0224 16:02:24.529352   47618 machine.go:88] provisioning docker machine ...
	I0224 16:02:24.529394   47618 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-367000"
	I0224 16:02:24.529513   47618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-367000
	I0224 16:02:24.590371   47618 main.go:141] libmachine: Using SSH client type: native
	I0224 16:02:24.590781   47618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62541 <nil> <nil>}
	I0224 16:02:24.590800   47618 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-367000 && echo "default-k8s-diff-port-367000" | sudo tee /etc/hostname
	I0224 16:02:24.735566   47618 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-367000
	
	I0224 16:02:24.735668   47618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-367000
	I0224 16:02:24.794263   47618 main.go:141] libmachine: Using SSH client type: native
	I0224 16:02:24.794597   47618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62541 <nil> <nil>}
	I0224 16:02:24.794621   47618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-367000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-367000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-367000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 16:02:24.932145   47618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 16:02:24.934211   47618 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
	I0224 16:02:24.934244   47618 ubuntu.go:177] setting up certificates
	I0224 16:02:24.934252   47618 provision.go:83] configureAuth start
	I0224 16:02:24.934334   47618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-367000
	I0224 16:02:24.991356   47618 provision.go:138] copyHostCerts
	I0224 16:02:24.991451   47618 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
	I0224 16:02:24.991461   47618 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 16:02:24.991596   47618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
	I0224 16:02:24.991800   47618 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
	I0224 16:02:24.991809   47618 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 16:02:24.991877   47618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
	I0224 16:02:24.992041   47618 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
	I0224 16:02:24.992046   47618 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 16:02:24.992108   47618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
	I0224 16:02:24.992221   47618 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-367000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-367000]
	I0224 16:02:25.181205   47618 provision.go:172] copyRemoteCerts
	I0224 16:02:25.181335   47618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 16:02:25.181445   47618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-367000
	I0224 16:02:25.241368   47618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62541 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/default-k8s-diff-port-367000/id_rsa Username:docker}
	I0224 16:02:25.335490   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 16:02:25.354712   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0224 16:02:25.375539   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0224 16:02:25.394952   47618 provision.go:86] duration metric: configureAuth took 460.651896ms
	I0224 16:02:25.394971   47618 ubuntu.go:193] setting minikube options for container-runtime
	I0224 16:02:25.395151   47618 config.go:182] Loaded profile config "default-k8s-diff-port-367000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 16:02:25.395227   47618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-367000
	I0224 16:02:25.455797   47618 main.go:141] libmachine: Using SSH client type: native
	I0224 16:02:25.456162   47618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62541 <nil> <nil>}
	I0224 16:02:25.456178   47618 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 16:02:25.593996   47618 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 16:02:25.594009   47618 ubuntu.go:71] root file system type: overlay
	I0224 16:02:25.594091   47618 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 16:02:25.594172   47618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-367000
	I0224 16:02:25.652424   47618 main.go:141] libmachine: Using SSH client type: native
	I0224 16:02:25.652790   47618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62541 <nil> <nil>}
	I0224 16:02:25.652836   47618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 16:02:25.798277   47618 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 16:02:25.798376   47618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-367000
	I0224 16:02:25.855538   47618 main.go:141] libmachine: Using SSH client type: native
	I0224 16:02:25.855892   47618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 62541 <nil> <nil>}
	I0224 16:02:25.855905   47618 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 16:02:26.504234   47618 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-25 00:02:25.796142009 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 16:02:26.504258   47618 machine.go:91] provisioned docker machine in 1.974827195s
	I0224 16:02:26.504264   47618 client.go:171] LocalClient.Create took 10.711423109s
	I0224 16:02:26.504280   47618 start.go:167] duration metric: libmachine.API.Create for "default-k8s-diff-port-367000" took 10.711496766s
	I0224 16:02:26.504288   47618 start.go:300] post-start starting for "default-k8s-diff-port-367000" (driver="docker")
	I0224 16:02:26.504293   47618 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 16:02:26.504365   47618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 16:02:26.504420   47618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-367000
	I0224 16:02:26.567237   47618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62541 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/default-k8s-diff-port-367000/id_rsa Username:docker}
	I0224 16:02:26.664156   47618 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 16:02:26.667817   47618 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 16:02:26.667833   47618 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 16:02:26.667840   47618 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 16:02:26.667847   47618 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 16:02:26.667858   47618 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
	I0224 16:02:26.667958   47618 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
	I0224 16:02:26.668168   47618 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
	I0224 16:02:26.668383   47618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 16:02:26.675673   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
	I0224 16:02:26.693295   47618 start.go:303] post-start completed in 188.988026ms
	I0224 16:02:26.693827   47618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-367000
	I0224 16:02:26.751600   47618 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/config.json ...
	I0224 16:02:26.752027   47618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 16:02:26.752092   47618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-367000
	I0224 16:02:26.813133   47618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62541 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/default-k8s-diff-port-367000/id_rsa Username:docker}
	I0224 16:02:26.904759   47618 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 16:02:26.909471   47618 start.go:128] duration metric: createHost completed in 11.160782068s
	I0224 16:02:26.909486   47618 start.go:83] releasing machines lock for "default-k8s-diff-port-367000", held for 11.160909148s
	I0224 16:02:26.909557   47618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-367000
	I0224 16:02:26.967396   47618 ssh_runner.go:195] Run: cat /version.json
	I0224 16:02:26.967411   47618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 16:02:26.967462   47618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-367000
	I0224 16:02:26.967482   47618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-367000
	I0224 16:02:27.031371   47618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62541 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/default-k8s-diff-port-367000/id_rsa Username:docker}
	I0224 16:02:27.031529   47618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62541 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/default-k8s-diff-port-367000/id_rsa Username:docker}
	I0224 16:02:27.123130   47618 ssh_runner.go:195] Run: systemctl --version
	I0224 16:02:27.176327   47618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 16:02:27.181573   47618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 16:02:27.202593   47618 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 16:02:27.202660   47618 ssh_runner.go:195] Run: which cri-dockerd
	I0224 16:02:27.207169   47618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 16:02:27.214864   47618 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 16:02:27.227898   47618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 16:02:27.243158   47618 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0224 16:02:27.243173   47618 start.go:485] detecting cgroup driver to use...
	I0224 16:02:27.243184   47618 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 16:02:27.243267   47618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 16:02:27.256746   47618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 16:02:27.265303   47618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 16:02:27.273884   47618 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 16:02:27.273945   47618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 16:02:27.282703   47618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 16:02:27.291502   47618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 16:02:27.300289   47618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 16:02:27.308816   47618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 16:02:27.316817   47618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 16:02:27.325473   47618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 16:02:27.332841   47618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 16:02:27.340059   47618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 16:02:27.406850   47618 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 16:02:27.481797   47618 start.go:485] detecting cgroup driver to use...
	I0224 16:02:27.481816   47618 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 16:02:27.481877   47618 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 16:02:27.494432   47618 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 16:02:27.494506   47618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 16:02:27.505506   47618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 16:02:27.520227   47618 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 16:02:27.595803   47618 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 16:02:27.690924   47618 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 16:02:27.690946   47618 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 16:02:27.705242   47618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 16:02:27.791261   47618 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 16:02:28.034883   47618 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 16:02:28.105351   47618 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 16:02:28.172932   47618 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 16:02:28.243851   47618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 16:02:28.308170   47618 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 16:02:28.320743   47618 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 16:02:28.320841   47618 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 16:02:28.325240   47618 start.go:553] Will wait 60s for crictl version
	I0224 16:02:28.325299   47618 ssh_runner.go:195] Run: which crictl
	I0224 16:02:28.329466   47618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 16:02:28.432304   47618 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 16:02:28.432388   47618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 16:02:28.457825   47618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 16:02:28.528513   47618 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 16:02:28.528740   47618 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-367000 dig +short host.docker.internal
	I0224 16:02:28.648047   47618 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 16:02:28.648174   47618 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 16:02:28.652734   47618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 16:02:28.662999   47618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-367000
	I0224 16:02:28.721657   47618 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 16:02:28.721733   47618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 16:02:28.741342   47618 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 16:02:28.741356   47618 docker.go:560] Images already preloaded, skipping extraction
	I0224 16:02:28.741434   47618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 16:02:28.761702   47618 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 16:02:28.761717   47618 cache_images.go:84] Images are preloaded, skipping loading
	I0224 16:02:28.761812   47618 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 16:02:28.788767   47618 cni.go:84] Creating CNI manager for ""
	I0224 16:02:28.788786   47618 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 16:02:28.788807   47618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0224 16:02:28.788828   47618 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-367000 NodeName:default-k8s-diff-port-367000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 16:02:28.788960   47618 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-367000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
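
The block above is a single multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube then copies to /var/tmp/minikube/kubeadm.yaml. As a rough aid for inspecting such a file, here is a stdlib-only Go sketch that splits it into documents and prints each "kind:"; the path is taken from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// kubeadm config files separate documents with standalone "---" lines.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
			}
		}
	}
}
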
	I0224 16:02:28.789033   47618 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-367000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-367000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0224 16:02:28.789105   47618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 16:02:28.797491   47618 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 16:02:28.797557   47618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 16:02:28.804990   47618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (460 bytes)
	I0224 16:02:28.817817   47618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 16:02:28.830944   47618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0224 16:02:28.844031   47618 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0224 16:02:28.848201   47618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
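
For context, the bash one-liner above makes the control-plane hostname mapping idempotent: it strips any stale "control-plane.minikube.internal" line from /etc/hosts and appends the current one. A minimal Go sketch of the same pattern follows; the IP, hostname, and path come from the log, while the helper itself is illustrative (the logged command stages /tmp/h.$$ and sudo-copies it into place, whereas this sketch writes directly).

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.67.2\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the control-plane hostname.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
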
	I0224 16:02:28.858169   47618 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000 for IP: 192.168.67.2
	I0224 16:02:28.858185   47618 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:02:28.858361   47618 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 16:02:28.858427   47618 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 16:02:28.858469   47618 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/client.key
	I0224 16:02:28.858482   47618 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/client.crt with IP's: []
	I0224 16:02:28.942035   47618 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/client.crt ...
	I0224 16:02:28.942045   47618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/client.crt: {Name:mka66500ffd8d5b5d90d8c93d405e60326a86df5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:02:28.942379   47618 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/client.key ...
	I0224 16:02:28.942390   47618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/client.key: {Name:mk6c4bf0f32311cafd6dcc693012a111b69d1ff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:02:28.942628   47618 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/apiserver.key.c7fa3a9e
	I0224 16:02:28.942646   47618 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0224 16:02:29.344330   47618 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/apiserver.crt.c7fa3a9e ...
	I0224 16:02:29.344349   47618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/apiserver.crt.c7fa3a9e: {Name:mk1661effa15ae80f6697bb94610159ef0a84995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:02:29.344667   47618 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/apiserver.key.c7fa3a9e ...
	I0224 16:02:29.344676   47618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/apiserver.key.c7fa3a9e: {Name:mk38591fe0b0cc4a6056a9cb475ddc183563248b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:02:29.344885   47618 certs.go:333] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/apiserver.crt
	I0224 16:02:29.345066   47618 certs.go:337] copying /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/apiserver.key
	I0224 16:02:29.345239   47618 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/proxy-client.key
	I0224 16:02:29.345255   47618 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/proxy-client.crt with IP's: []
	I0224 16:02:29.778025   47618 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/proxy-client.crt ...
	I0224 16:02:29.778038   47618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/proxy-client.crt: {Name:mk7d5cd1d106dca742da4a944e9cd97628bde36c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:02:29.778324   47618 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/proxy-client.key ...
	I0224 16:02:29.778332   47618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/proxy-client.key: {Name:mk3ea43de9d749fafedd3ce5e577d77111401fac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
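
The "generating ... signed cert" steps above create a client certificate, an apiserver serving certificate with the listed IP SANs (192.168.67.2, 10.96.0.1, 127.0.0.1, 10.0.0.1), and a front-proxy client certificate, all signed by the profile's CA. A condensed Go sketch of the apiserver-style step using crypto/x509 follows; it generates a throwaway CA in memory instead of loading .minikube/ca.{crt,key}, ignores errors for brevity, and is illustrative rather than minikube's actual code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube reuses the CA generated for the whole test run.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert carrying the IP SANs seen in the log above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.67.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
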
	I0224 16:02:29.778730   47618 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 16:02:29.778779   47618 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 16:02:29.778789   47618 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 16:02:29.778822   47618 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 16:02:29.778855   47618 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 16:02:29.778884   47618 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 16:02:29.778952   47618 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 16:02:29.779429   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 16:02:29.798469   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 16:02:29.816338   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 16:02:29.836363   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/default-k8s-diff-port-367000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0224 16:02:29.855686   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 16:02:29.874321   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 16:02:29.893545   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 16:02:29.911863   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 16:02:29.950292   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 16:02:29.968249   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 16:02:29.985905   47618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 16:02:30.003409   47618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 16:02:30.016724   47618 ssh_runner.go:195] Run: openssl version
	I0224 16:02:30.022500   47618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 16:02:30.030718   47618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 16:02:30.034715   47618 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 16:02:30.034762   47618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 16:02:30.040298   47618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 16:02:30.048505   47618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 16:02:30.056592   47618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 16:02:30.060617   47618 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 16:02:30.060664   47618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 16:02:30.066490   47618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
	I0224 16:02:30.074847   47618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 16:02:30.083292   47618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 16:02:30.087618   47618 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 16:02:30.087671   47618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 16:02:30.093363   47618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
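
Each "openssl x509 -hash" plus "ln -fs" pair above publishes a CA into the OpenSSL trust directory: /etc/ssl/certs/<subject-hash>.0 must point at the PEM file for OpenSSL-based clients to find it. A small Go sketch of that step, shelling out to openssl just like the logged commands (it assumes openssl on PATH and write access to /etc/ssl/certs, and is illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(pemPath string) error {
	// Equivalent of: openssl x509 -hash -noout -in <pemPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirrors the force flag of ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
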
	I0224 16:02:30.101556   47618 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-367000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-367000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 16:02:30.101669   47618 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 16:02:30.121118   47618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 16:02:30.129596   47618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 16:02:30.137217   47618 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0224 16:02:30.137309   47618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 16:02:30.144863   47618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 16:02:30.144888   47618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 16:02:30.195568   47618 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0224 16:02:30.195613   47618 kubeadm.go:322] [preflight] Running pre-flight checks
	I0224 16:02:30.304394   47618 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 16:02:30.304539   47618 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 16:02:30.304629   47618 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0224 16:02:30.438557   47618 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 16:02:30.480742   47618 out.go:204]   - Generating certificates and keys ...
	I0224 16:02:30.480803   47618 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0224 16:02:30.480884   47618 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0224 16:02:30.640397   47618 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 16:02:30.712334   47618 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0224 16:02:30.832214   47618 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0224 16:02:30.985449   47618 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0224 16:02:31.079962   47618 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0224 16:02:31.080209   47618 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-367000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0224 16:02:31.263774   47618 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0224 16:02:31.264043   47618 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-367000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0224 16:02:31.306079   47618 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 16:02:31.439554   47618 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 16:02:31.657481   47618 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0224 16:02:31.657547   47618 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 16:02:31.744839   47618 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 16:02:31.896911   47618 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 16:02:32.010934   47618 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 16:02:32.071703   47618 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 16:02:32.083201   47618 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 16:02:32.083860   47618 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 16:02:32.083916   47618 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0224 16:02:32.158547   47618 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 16:02:32.180177   47618 out.go:204]   - Booting up control plane ...
	I0224 16:02:32.180259   47618 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 16:02:32.180338   47618 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 16:02:32.180432   47618 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 16:02:32.180499   47618 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 16:02:32.180659   47618 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0224 16:02:41.668180   47618 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.502965 seconds
	I0224 16:02:41.668334   47618 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 16:02:41.677507   47618 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 16:02:42.193275   47618 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0224 16:02:42.193453   47618 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-367000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 16:02:42.701966   47618 kubeadm.go:322] [bootstrap-token] Using token: ruu4hp.tgmr023l0uipj51k
	I0224 16:02:42.741811   47618 out.go:204]   - Configuring RBAC rules ...
	I0224 16:02:42.742083   47618 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 16:02:42.786951   47618 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 16:02:42.791898   47618 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 16:02:42.794003   47618 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 16:02:42.796114   47618 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 16:02:42.799168   47618 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 16:02:42.807140   47618 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 16:02:42.955589   47618 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0224 16:02:43.189965   47618 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0224 16:02:43.190622   47618 kubeadm.go:322] 
	I0224 16:02:43.190694   47618 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0224 16:02:43.190703   47618 kubeadm.go:322] 
	I0224 16:02:43.190782   47618 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0224 16:02:43.190792   47618 kubeadm.go:322] 
	I0224 16:02:43.190844   47618 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0224 16:02:43.190943   47618 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 16:02:43.191027   47618 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 16:02:43.191038   47618 kubeadm.go:322] 
	I0224 16:02:43.191127   47618 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0224 16:02:43.191139   47618 kubeadm.go:322] 
	I0224 16:02:43.191192   47618 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 16:02:43.191204   47618 kubeadm.go:322] 
	I0224 16:02:43.191276   47618 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0224 16:02:43.191358   47618 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 16:02:43.191461   47618 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 16:02:43.191473   47618 kubeadm.go:322] 
	I0224 16:02:43.191570   47618 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0224 16:02:43.191665   47618 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0224 16:02:43.191672   47618 kubeadm.go:322] 
	I0224 16:02:43.191738   47618 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token ruu4hp.tgmr023l0uipj51k \
	I0224 16:02:43.191845   47618 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e \
	I0224 16:02:43.191870   47618 kubeadm.go:322] 	--control-plane 
	I0224 16:02:43.191880   47618 kubeadm.go:322] 
	I0224 16:02:43.191953   47618 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0224 16:02:43.191959   47618 kubeadm.go:322] 
	I0224 16:02:43.192034   47618 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token ruu4hp.tgmr023l0uipj51k \
	I0224 16:02:43.192130   47618 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cacecc6f7c388e702864470f6a38e8a6741c457f3475ca4013420ff00791f37e 
	I0224 16:02:43.195711   47618 kubeadm.go:322] W0225 00:02:30.188135    1298 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0224 16:02:43.195839   47618 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0224 16:02:43.195926   47618 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 16:02:43.195937   47618 cni.go:84] Creating CNI manager for ""
	I0224 16:02:43.195946   47618 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 16:02:43.219267   47618 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 16:02:43.263198   47618 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 16:02:43.272264   47618 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0224 16:02:43.287608   47618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 16:02:43.287691   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=08976559d74fb9c2654733dc21cb8f9d9ec24374 minikube.k8s.io/name=default-k8s-diff-port-367000 minikube.k8s.io/updated_at=2023_02_24T16_02_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:43.287691   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:43.457006   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:43.473260   47618 ops.go:34] apiserver oom_adj: -16
	I0224 16:02:44.054830   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:44.554906   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:45.054818   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:45.555700   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:46.055138   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:46.555014   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:47.054890   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:47.554933   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:48.055784   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:48.556991   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:49.054907   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:49.557035   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:50.055013   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:50.555411   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:51.056811   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:51.555512   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:52.055218   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:52.555785   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:53.055142   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:53.555113   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:54.055356   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:54.555039   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:55.056767   47618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 16:02:55.130982   47618 kubeadm.go:1073] duration metric: took 11.843000794s to wait for elevateKubeSystemPrivileges.
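
The burst of "kubectl get sa default" runs above is a plain poll loop: retry roughly every 500ms until the default ServiceAccount exists (which is what elevateKubeSystemPrivileges waits for) or a deadline expires. A minimal Go sketch of that pattern; the kubeconfig path matches the log, the rest is illustrative.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runKubectl is a hypothetical helper standing in for minikube's ssh_runner call.
func runKubectl(ctx context.Context) error {
	cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	return cmd.Run()
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		if err := runKubectl(ctx); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for default ServiceAccount")
			return
		case <-time.After(500 * time.Millisecond):
			// retry
		}
	}
}
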
	I0224 16:02:55.130999   47618 kubeadm.go:403] StartCluster complete in 25.028698034s
	I0224 16:02:55.131016   47618 settings.go:142] acquiring lock: {Name:mk61f6764f7c264302b01ffc8eee0ee0f10d20c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:02:55.131105   47618 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 16:02:55.131794   47618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/kubeconfig: {Name:mk1182fefc6ba3b7ea2d2356c47127703005a4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:02:55.158322   47618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 16:02:55.158352   47618 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0224 16:02:55.158415   47618 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-diff-port-367000"
	I0224 16:02:55.158422   47618 addons.go:65] Setting default-storageclass=true in profile "default-k8s-diff-port-367000"
	I0224 16:02:55.158429   47618 addons.go:227] Setting addon storage-provisioner=true in "default-k8s-diff-port-367000"
	I0224 16:02:55.158430   47618 config.go:182] Loaded profile config "default-k8s-diff-port-367000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 16:02:55.158441   47618 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-367000"
	I0224 16:02:55.158470   47618 host.go:66] Checking if "default-k8s-diff-port-367000" exists ...
	I0224 16:02:55.158697   47618 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-367000 --format={{.State.Status}}
	I0224 16:02:55.159312   47618 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-367000 --format={{.State.Status}}
	I0224 16:02:55.289195   47618 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0224 16:02:55.495977   47618 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 23:45:08 UTC, end at Sat 2023-02-25 00:02:56 UTC. --
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.356016738Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.356576667Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.356628748Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357726958Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357784747Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357825944Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357836059Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357890010Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357942974Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357965911Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357981566Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357993788Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.358105260Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.358296891Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.358366239Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.358900961Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.366092775Z" level=info msg="Loading containers: start."
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.444603684Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.477227417Z" level=info msg="Loading containers: done."
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.485430316Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.485498171Z" level=info msg="Daemon has completed initialization"
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.506662985Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 systemd[1]: Started Docker Application Container Engine.
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.510654920Z" level=info msg="API listen on [::]:2376"
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.516789696Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-02-25T00:02:58Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  00:02:58 up  3:02,  0 users,  load average: 0.68, 0.87, 1.00
	Linux old-k8s-version-583000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 23:45:08 UTC, end at Sat 2023-02-25 00:02:59 UTC. --
	Feb 25 00:02:57 old-k8s-version-583000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 25 00:02:57 old-k8s-version-583000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 25 00:02:57 old-k8s-version-583000 kubelet[24193]: I0225 00:02:57.642597   24193 server.go:410] Version: v1.16.0
	Feb 25 00:02:57 old-k8s-version-583000 kubelet[24193]: I0225 00:02:57.642778   24193 plugins.go:100] No cloud provider specified.
	Feb 25 00:02:57 old-k8s-version-583000 kubelet[24193]: I0225 00:02:57.642886   24193 server.go:773] Client rotation is on, will bootstrap in background
	Feb 25 00:02:57 old-k8s-version-583000 kubelet[24193]: I0225 00:02:57.644782   24193 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 25 00:02:57 old-k8s-version-583000 kubelet[24193]: W0225 00:02:57.647376   24193 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 25 00:02:57 old-k8s-version-583000 kubelet[24193]: W0225 00:02:57.647575   24193 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 25 00:02:57 old-k8s-version-583000 kubelet[24193]: F0225 00:02:57.647639   24193 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 25 00:02:57 old-k8s-version-583000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 25 00:02:57 old-k8s-version-583000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 25 00:02:58 old-k8s-version-583000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Feb 25 00:02:58 old-k8s-version-583000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 25 00:02:58 old-k8s-version-583000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 25 00:02:58 old-k8s-version-583000 kubelet[24207]: I0225 00:02:58.402767   24207 server.go:410] Version: v1.16.0
	Feb 25 00:02:58 old-k8s-version-583000 kubelet[24207]: I0225 00:02:58.403008   24207 plugins.go:100] No cloud provider specified.
	Feb 25 00:02:58 old-k8s-version-583000 kubelet[24207]: I0225 00:02:58.403017   24207 server.go:773] Client rotation is on, will bootstrap in background
	Feb 25 00:02:58 old-k8s-version-583000 kubelet[24207]: I0225 00:02:58.404706   24207 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 25 00:02:58 old-k8s-version-583000 kubelet[24207]: W0225 00:02:58.405427   24207 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 25 00:02:58 old-k8s-version-583000 kubelet[24207]: W0225 00:02:58.405536   24207 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 25 00:02:58 old-k8s-version-583000 kubelet[24207]: F0225 00:02:58.405610   24207 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 25 00:02:58 old-k8s-version-583000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 25 00:02:58 old-k8s-version-583000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 25 00:02:59 old-k8s-version-583000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 930.
	Feb 25 00:02:59 old-k8s-version-583000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 16:02:58.702402   47777 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-583000 -n old-k8s-version-583000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 2 (405.43411ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-583000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.52s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:04:20.988329   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
E0224 16:04:24.839615   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:04:55.242688   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:05:10.529005   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:05:54.290395   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:06:01.248567   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:06:07.622161   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 16:06:10.411236   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:06:35.390976   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:07:41.523359   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:07:46.417797   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:08:19.417719   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:08:30.461957   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0224 16:09:04.297804   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:61759/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0224 16:09:20.997273   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/false-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0224 16:09:24.849991   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0224 16:09:55.249672   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0224 16:10:10.539099   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0224 16:10:44.575579   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0224 16:10:54.299636   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0224 16:11:01.258002   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0224 16:11:07.630684   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0224 16:11:10.420818   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/no-preload-540000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0224 16:11:35.400111   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-583000 -n old-k8s-version-583000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 2 (397.25426ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-583000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-583000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-583000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.463µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-583000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-583000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-583000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa",
	        "Created": "2023-02-24T23:39:27.334833203Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 677863,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-24T23:45:08.286781852Z",
	            "FinishedAt": "2023-02-24T23:45:05.367336651Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hostname",
	        "HostsPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/hosts",
	        "LogPath": "/var/lib/docker/containers/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa/9cf258058c712ff0d19281911206a5b9dff66a4ac0958748a0ca2078b040fffa-json.log",
	        "Name": "/old-k8s-version-583000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-583000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-583000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07-init/diff:/var/lib/docker/overlay2/090bcc891b21f0d9b31c4a5b1a320c5e316289558e632a2b6c5e992d202fb20b/diff:/var/lib/docker/overlay2/d4038dc077274b7995ad7ae819cc855efc4eb5ece26de70285bfecf41ffeeaf0/diff:/var/lib/docker/overlay2/323d8b1f094a7a667b9ac22d681d79418dd180a1f48f4df11eb68204511a1900/diff:/var/lib/docker/overlay2/58c178b988b717cf5a66ca855d8eeafaeedbbecbb8d17b3c624da9d11da84298/diff:/var/lib/docker/overlay2/9cc1438791a51e9ed2a68dfdbebf6e6efe1b65ead288011935a3b80743da99b3/diff:/var/lib/docker/overlay2/57bff46aefba77057092b40bce71150843bf0396273131130e9807d1b4f49e64/diff:/var/lib/docker/overlay2/3e72f6db5681e2cc253f09aba7963765d8fd1533a1281f5bebd15cc32c6917eb/diff:/var/lib/docker/overlay2/6b9bf9d6e5a1d7e6bbba98ce5a8499b7d5783c593cc9748ea564e294b35db755/diff:/var/lib/docker/overlay2/ee777a22d8793697035ce27b4ea9217e5be9957b632769eaab350b54ab91ae9f/diff:/var/lib/docker/overlay2/e9e851
14730e02ef6096398efed46ec9a3917746bc11be0b94664e19be511d88/diff:/var/lib/docker/overlay2/5b998a77bcbd13fdcdb765faf9508458bc25089bd66cfda5b045dd1c00f81dde/diff:/var/lib/docker/overlay2/f9e79da15c2ae6e03de004791da8ab9e42fe890d60aee6de9f498a7ca5759c3b/diff:/var/lib/docker/overlay2/dc2cee5d7abc39c9318e6509d101802620afffde40cb40ce458a69eee2b596c7/diff:/var/lib/docker/overlay2/283655f6c15014eea16666a5dc2722a0affb0c63989379b417ffc172d79ab74e/diff:/var/lib/docker/overlay2/c123625f83f65d73fbae79bdcc7e7720e3afc660bd2bef55a0fcf5e6b0eed020/diff:/var/lib/docker/overlay2/e05ad0591b705fb7ebf2dbd767ed0d71b991218a02ddff77025837a5baab2181/diff:/var/lib/docker/overlay2/8324b7189c7dc594588028d0a241da19320051b98cf2eecbfb8e535dbe4828fd/diff:/var/lib/docker/overlay2/64d81e83771f64d5b5da39a8cba2980760fe27693c4e8f6be12bb75f8d7ee52a/diff:/var/lib/docker/overlay2/517cd4ce3dafee340299da3fac09378c877fc3becfa3f04eebc4ada282356790/diff:/var/lib/docker/overlay2/57e275726317c089e6a0ab3805ebdd3e762884089dc1dc3105b684b35c4f531e/diff:/var/lib/d
ocker/overlay2/861ff6c5ede4a603bbf860ae177b03a94994bef411ecf75e7c872ec757135fdc/diff:/var/lib/docker/overlay2/232c9bcdc5f0db2998256d21ae7ba58ca75e1dd4a4241797a4ea487263518a6a/diff:/var/lib/docker/overlay2/8d295896dcb1e09f3db53ef49b025df21f6e78b7ebb79f65a5fa27168702e8f0/diff:/var/lib/docker/overlay2/25ae29e984510ea508a8961c42d9a34730690423007d852d55a86191b21be913/diff:/var/lib/docker/overlay2/1a4879fcf6445660156e579037a1d9e6e2ddb1724d86797fa3ed3a1008d52950/diff:/var/lib/docker/overlay2/7b5fe07a6af400157cd971cd3fcd205488943ef28bed0f4306f1384221d0014b/diff:/var/lib/docker/overlay2/066cc6a66f3ee4d02cab8d3abd0f45f3c22fcdfa77adc5a82d8e526dbf3de5c0/diff:/var/lib/docker/overlay2/6c6c18c3d0df0e0add96377a7dfedf0513565ea145aa99ee672dc6b903d1a77d/diff:/var/lib/docker/overlay2/6861357e8485926b15888713d18c560aaa7d42d1278899a902d66234091e8a9d/diff:/var/lib/docker/overlay2/649cf18532e44c7c06d4480b7b048a072e4332bb7a1705ba0bcd418dd38ce0b9/diff:/var/lib/docker/overlay2/e655d5600a6a88950aa7ec0f38af04d2bda578ac23dd44970e8fb13703d
d08b7/diff:/var/lib/docker/overlay2/9c716cb8f3100de3f4cdbea2f4d7af8fa502516b6135097a1709296469f181e5/diff:/var/lib/docker/overlay2/18dac52ec743664ef1a9ca7c093035ab25db9c736fcec32e8b38d8db4434157e/diff:/var/lib/docker/overlay2/87e6a678acd66787264fe25f8e2fd1840ef476f6b7d969f16168694d101afe0b/diff:/var/lib/docker/overlay2/dca590590d1fb513a0d455929601ee877c0b9b6c247d06f1d11f83e871595f79/diff:/var/lib/docker/overlay2/157a7e690ef104d198b536028ab39be1c7b357a428d4ce8278aeb79f0ee74c0e/diff:/var/lib/docker/overlay2/a6f94e786d222ddcfdbfc14e1aa38a83b44430727fcafa2a47968395f2594111/diff:/var/lib/docker/overlay2/f3f6a39c3e36660974945c281b08bd20e12e4a85414f7716da020ee3afb53c9f/diff:/var/lib/docker/overlay2/71cd12298612d61c3229a50fa5c0359db0057e9a96e39ad20cdb8ea48dbfe559/diff:/var/lib/docker/overlay2/8c8ae2ac0512cc12aede073c1eeaae85b89c880d2bd544194b94e3a68db3fc07/diff:/var/lib/docker/overlay2/00e5186e2ada52ba9bb84c56be91327d3cebfb4f918de062856ecdc663a06f8a/diff:/var/lib/docker/overlay2/cbca231439ca50214d25e75010827f686ff701
248205654e17439440b9b11fe9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0378edae673977c276449a87574566f39b3e7178feb64aa8573e700d57017f07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-583000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-583000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-583000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-583000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3947a0f2180836ac916ac27cc999772cb08b4096aeffcc7de4c5c0d9b03b291e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61760"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61761"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61762"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61763"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61759"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3947a0f21808",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-583000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9cf258058c71",
	                        "old-k8s-version-583000"
	                    ],
	                    "NetworkID": "2385c01da446b4577899b64cfb7c6c9559167bd3e5ad2b3a0a47d05890f119a8",
	                    "EndpointID": "4f2e701947effd2e133234f8a53a9152bf92712aec3312653e8d2e8dfb2ddc47",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
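The NetworkSettings.Ports block in the inspect dump above is how the harness maps the container's published ports back to the host (22/tcp is bound to 127.0.0.1:61760 for old-k8s-version-583000). A minimal sketch of pulling that value back out with the same Go-template pattern the provisioning log uses further down; it assumes the container still exists when the command is run:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-583000
    # prints 61760 given the JSON above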
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 2 (409.8719ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-583000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-583000 logs -n 25: (3.45301733s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-451000                                | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 16:02 PST | 24 Feb 23 16:02 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p embed-certs-451000                                | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 16:02 PST | 24 Feb 23 16:02 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-451000                                | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 16:02 PST | 24 Feb 23 16:02 PST |
	| delete  | -p embed-certs-451000                                | embed-certs-451000           | jenkins | v1.29.0 | 24 Feb 23 16:02 PST | 24 Feb 23 16:02 PST |
	| delete  | -p                                                   | disable-driver-mounts-669000 | jenkins | v1.29.0 | 24 Feb 23 16:02 PST | 24 Feb 23 16:02 PST |
	|         | disable-driver-mounts-669000                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-367000 | jenkins | v1.29.0 | 24 Feb 23 16:02 PST | 24 Feb 23 16:03 PST |
	|         | default-k8s-diff-port-367000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                             | default-k8s-diff-port-367000 | jenkins | v1.29.0 | 24 Feb 23 16:03 PST | 24 Feb 23 16:03 PST |
	|         | default-k8s-diff-port-367000                         |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p                                                   | default-k8s-diff-port-367000 | jenkins | v1.29.0 | 24 Feb 23 16:03 PST | 24 Feb 23 16:03 PST |
	|         | default-k8s-diff-port-367000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-367000     | default-k8s-diff-port-367000 | jenkins | v1.29.0 | 24 Feb 23 16:03 PST | 24 Feb 23 16:03 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-367000 | jenkins | v1.29.0 | 24 Feb 23 16:03 PST | 24 Feb 23 16:08 PST |
	|         | default-k8s-diff-port-367000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| ssh     | -p                                                   | default-k8s-diff-port-367000 | jenkins | v1.29.0 | 24 Feb 23 16:08 PST | 24 Feb 23 16:08 PST |
	|         | default-k8s-diff-port-367000                         |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                           |                              |         |         |                     |                     |
	| pause   | -p                                                   | default-k8s-diff-port-367000 | jenkins | v1.29.0 | 24 Feb 23 16:08 PST | 24 Feb 23 16:08 PST |
	|         | default-k8s-diff-port-367000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p                                                   | default-k8s-diff-port-367000 | jenkins | v1.29.0 | 24 Feb 23 16:08 PST | 24 Feb 23 16:08 PST |
	|         | default-k8s-diff-port-367000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-367000 | jenkins | v1.29.0 | 24 Feb 23 16:08 PST | 24 Feb 23 16:09 PST |
	|         | default-k8s-diff-port-367000                         |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-367000 | jenkins | v1.29.0 | 24 Feb 23 16:09 PST | 24 Feb 23 16:09 PST |
	|         | default-k8s-diff-port-367000                         |                              |         |         |                     |                     |
	| start   | -p newest-cni-192000 --memory=2200 --alsologtostderr | newest-cni-192000            | jenkins | v1.29.0 | 24 Feb 23 16:09 PST | 24 Feb 23 16:09 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-192000           | newest-cni-192000            | jenkins | v1.29.0 | 24 Feb 23 16:09 PST | 24 Feb 23 16:09 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p newest-cni-192000                                 | newest-cni-192000            | jenkins | v1.29.0 | 24 Feb 23 16:09 PST | 24 Feb 23 16:09 PST |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-192000                | newest-cni-192000            | jenkins | v1.29.0 | 24 Feb 23 16:09 PST | 24 Feb 23 16:09 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p newest-cni-192000 --memory=2200 --alsologtostderr | newest-cni-192000            | jenkins | v1.29.0 | 24 Feb 23 16:09 PST | 24 Feb 23 16:10 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-192000 sudo                            | newest-cni-192000            | jenkins | v1.29.0 | 24 Feb 23 16:10 PST | 24 Feb 23 16:10 PST |
	|         | crictl images -o json                                |                              |         |         |                     |                     |
	| pause   | -p newest-cni-192000                                 | newest-cni-192000            | jenkins | v1.29.0 | 24 Feb 23 16:10 PST | 24 Feb 23 16:10 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p newest-cni-192000                                 | newest-cni-192000            | jenkins | v1.29.0 | 24 Feb 23 16:10 PST | 24 Feb 23 16:10 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-192000                                 | newest-cni-192000            | jenkins | v1.29.0 | 24 Feb 23 16:10 PST | 24 Feb 23 16:10 PST |
	| delete  | -p newest-cni-192000                                 | newest-cni-192000            | jenkins | v1.29.0 | 24 Feb 23 16:10 PST | 24 Feb 23 16:10 PST |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 16:09:50
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 16:09:50.840731   48567 out.go:296] Setting OutFile to fd 1 ...
	I0224 16:09:50.840919   48567 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 16:09:50.840924   48567 out.go:309] Setting ErrFile to fd 2...
	I0224 16:09:50.840928   48567 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 16:09:50.841037   48567 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 16:09:50.842474   48567 out.go:303] Setting JSON to false
	I0224 16:09:50.860801   48567 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":11364,"bootTime":1677272426,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 16:09:50.860887   48567 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 16:09:50.882464   48567 out.go:177] * [newest-cni-192000] minikube v1.29.0 on Darwin 13.2.1
	I0224 16:09:50.925372   48567 notify.go:220] Checking for updates...
	I0224 16:09:50.925404   48567 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 16:09:50.947463   48567 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 16:09:50.969487   48567 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 16:09:50.991363   48567 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 16:09:51.013462   48567 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	I0224 16:09:51.035381   48567 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 16:09:51.057831   48567 config.go:182] Loaded profile config "newest-cni-192000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 16:09:51.058516   48567 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 16:09:51.120782   48567 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 16:09:51.120890   48567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 16:09:51.263479   48567 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-25 00:09:51.169781271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 16:09:51.307043   48567 out.go:177] * Using the docker driver based on existing profile
	I0224 16:09:51.327867   48567 start.go:296] selected driver: docker
	I0224 16:09:51.327884   48567 start.go:857] validating driver "docker" against &{Name:newest-cni-192000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-192000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: S
ubnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 16:09:51.327965   48567 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 16:09:51.330554   48567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 16:09:51.477156   48567 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-25 00:09:51.380777462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 16:09:51.477331   48567 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0224 16:09:51.477350   48567 cni.go:84] Creating CNI manager for ""
	I0224 16:09:51.477362   48567 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 16:09:51.477371   48567 start_flags.go:319] config:
	{Name:newest-cni-192000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-192000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 16:09:51.499431   48567 out.go:177] * Starting control plane node newest-cni-192000 in cluster newest-cni-192000
	I0224 16:09:51.521132   48567 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 16:09:51.543191   48567 out.go:177] * Pulling base image ...
	I0224 16:09:51.566399   48567 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 16:09:51.566409   48567 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 16:09:51.566499   48567 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 16:09:51.566521   48567 cache.go:57] Caching tarball of preloaded images
	I0224 16:09:51.566734   48567 preload.go:174] Found /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 16:09:51.566753   48567 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 16:09:51.567833   48567 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/newest-cni-192000/config.json ...
	I0224 16:09:51.623966   48567 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0224 16:09:51.623986   48567 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0224 16:09:51.624007   48567 cache.go:193] Successfully downloaded all kic artifacts
	I0224 16:09:51.624044   48567 start.go:364] acquiring machines lock for newest-cni-192000: {Name:mk8783a5d1f75c1ccc4474a3560a67fe59dd155d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 16:09:51.624126   48567 start.go:368] acquired machines lock for "newest-cni-192000" in 63.945µs
	I0224 16:09:51.624153   48567 start.go:96] Skipping create...Using existing machine configuration
	I0224 16:09:51.624162   48567 fix.go:55] fixHost starting: 
	I0224 16:09:51.624387   48567 cli_runner.go:164] Run: docker container inspect newest-cni-192000 --format={{.State.Status}}
	I0224 16:09:51.681873   48567 fix.go:103] recreateIfNeeded on newest-cni-192000: state=Stopped err=<nil>
	W0224 16:09:51.681917   48567 fix.go:129] unexpected machine state, will restart: <nil>
	I0224 16:09:51.704003   48567 out.go:177] * Restarting existing docker container for "newest-cni-192000" ...
	I0224 16:09:51.725962   48567 cli_runner.go:164] Run: docker start newest-cni-192000
	I0224 16:09:52.071414   48567 cli_runner.go:164] Run: docker container inspect newest-cni-192000 --format={{.State.Status}}
	I0224 16:09:52.137647   48567 kic.go:426] container "newest-cni-192000" state is running.
	I0224 16:09:52.138527   48567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-192000
	I0224 16:09:52.207466   48567 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/newest-cni-192000/config.json ...
	I0224 16:09:52.207910   48567 machine.go:88] provisioning docker machine ...
	I0224 16:09:52.207935   48567 ubuntu.go:169] provisioning hostname "newest-cni-192000"
	I0224 16:09:52.208012   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:09:52.272612   48567 main.go:141] libmachine: Using SSH client type: native
	I0224 16:09:52.273023   48567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63181 <nil> <nil>}
	I0224 16:09:52.273036   48567 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-192000 && echo "newest-cni-192000" | sudo tee /etc/hostname
	I0224 16:09:52.420666   48567 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-192000
	
	I0224 16:09:52.420761   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:09:52.481822   48567 main.go:141] libmachine: Using SSH client type: native
	I0224 16:09:52.482184   48567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63181 <nil> <nil>}
	I0224 16:09:52.482204   48567 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-192000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-192000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-192000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 16:09:52.616320   48567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
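	The script above only rewrites the 127.0.1.1 entry (or appends one) when the hostname is not already present in /etc/hosts, so the container's own name keeps resolving locally after the rename. A quick way to confirm the result over the same SSH session, assuming the default 127.0.1.1 entry was present and rewritten:

	    grep '^127.0.1.1' /etc/hosts
	    # expected: 127.0.1.1 newest-cni-192000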
	I0224 16:09:52.616342   48567 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-26406/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-26406/.minikube}
	I0224 16:09:52.616362   48567 ubuntu.go:177] setting up certificates
	I0224 16:09:52.616370   48567 provision.go:83] configureAuth start
	I0224 16:09:52.616454   48567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-192000
	I0224 16:09:52.674550   48567 provision.go:138] copyHostCerts
	I0224 16:09:52.674656   48567 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem, removing ...
	I0224 16:09:52.674666   48567 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem
	I0224 16:09:52.674768   48567 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.pem (1078 bytes)
	I0224 16:09:52.674991   48567 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem, removing ...
	I0224 16:09:52.674999   48567 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem
	I0224 16:09:52.675069   48567 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/cert.pem (1123 bytes)
	I0224 16:09:52.675218   48567 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem, removing ...
	I0224 16:09:52.675223   48567 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem
	I0224 16:09:52.675296   48567 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-26406/.minikube/key.pem (1675 bytes)
	I0224 16:09:52.675423   48567 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem org=jenkins.newest-cni-192000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-192000]
	I0224 16:09:52.759531   48567 provision.go:172] copyRemoteCerts
	I0224 16:09:52.759687   48567 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 16:09:52.759898   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:09:52.818301   48567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63181 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/newest-cni-192000/id_rsa Username:docker}
	I0224 16:09:52.911552   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 16:09:52.928872   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0224 16:09:52.946333   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0224 16:09:52.963527   48567 provision.go:86] duration metric: configureAuth took 347.133228ms
	I0224 16:09:52.963540   48567 ubuntu.go:193] setting minikube options for container-runtime
	I0224 16:09:52.963693   48567 config.go:182] Loaded profile config "newest-cni-192000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 16:09:52.963770   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:09:53.021457   48567 main.go:141] libmachine: Using SSH client type: native
	I0224 16:09:53.021803   48567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63181 <nil> <nil>}
	I0224 16:09:53.021814   48567 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 16:09:53.159119   48567 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 16:09:53.159137   48567 ubuntu.go:71] root file system type: overlay
	I0224 16:09:53.159238   48567 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 16:09:53.159324   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:09:53.216853   48567 main.go:141] libmachine: Using SSH client type: native
	I0224 16:09:53.217208   48567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63181 <nil> <nil>}
	I0224 16:09:53.217259   48567 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 16:09:53.360863   48567 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 16:09:53.360960   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:09:53.418633   48567 main.go:141] libmachine: Using SSH client type: native
	I0224 16:09:53.418990   48567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 63181 <nil> <nil>}
	I0224 16:09:53.419003   48567 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 16:09:53.557708   48567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 16:09:53.557725   48567 machine.go:91] provisioned docker machine in 1.34976601s
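	The diff -u ... || { mv ...; systemctl ... restart docker; } sequence a few lines above only installs the regenerated docker.service and restarts the daemon when the rendered unit actually differs from what is already on disk; on an unchanged restart it is a no-op. A small sketch for checking which ExecStart the active unit ended up with, using the same systemctl cat call the runner issues later in this log:

	    sudo systemctl cat docker.service | grep '^ExecStart='
	    # the empty ExecStart= resets the base unit; the second line carries the dockerd flags shown above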
	I0224 16:09:53.557735   48567 start.go:300] post-start starting for "newest-cni-192000" (driver="docker")
	I0224 16:09:53.557741   48567 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 16:09:53.557809   48567 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 16:09:53.557864   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:09:53.615811   48567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63181 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/newest-cni-192000/id_rsa Username:docker}
	I0224 16:09:53.711966   48567 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 16:09:53.715641   48567 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 16:09:53.715656   48567 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 16:09:53.715663   48567 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 16:09:53.715667   48567 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0224 16:09:53.715679   48567 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/addons for local assets ...
	I0224 16:09:53.715767   48567 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-26406/.minikube/files for local assets ...
	I0224 16:09:53.715925   48567 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem -> 268712.pem in /etc/ssl/certs
	I0224 16:09:53.716087   48567 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0224 16:09:53.723478   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /etc/ssl/certs/268712.pem (1708 bytes)
	I0224 16:09:53.740622   48567 start.go:303] post-start completed in 182.872931ms
	I0224 16:09:53.740692   48567 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 16:09:53.740744   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:09:53.800073   48567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63181 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/newest-cni-192000/id_rsa Username:docker}
	I0224 16:09:53.893470   48567 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 16:09:53.898276   48567 fix.go:57] fixHost completed within 2.274044898s
	I0224 16:09:53.898292   48567 start.go:83] releasing machines lock for "newest-cni-192000", held for 2.27409032s
	I0224 16:09:53.898383   48567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-192000
	I0224 16:09:53.955036   48567 ssh_runner.go:195] Run: cat /version.json
	I0224 16:09:53.955079   48567 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 16:09:53.955108   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:09:53.955153   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:09:54.015779   48567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63181 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/newest-cni-192000/id_rsa Username:docker}
	I0224 16:09:54.015772   48567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63181 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/newest-cni-192000/id_rsa Username:docker}
	I0224 16:09:54.160605   48567 ssh_runner.go:195] Run: systemctl --version
	I0224 16:09:54.165489   48567 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 16:09:54.170516   48567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 16:09:54.186001   48567 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 16:09:54.186067   48567 ssh_runner.go:195] Run: which cri-dockerd
	I0224 16:09:54.189962   48567 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 16:09:54.197342   48567 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0224 16:09:54.210113   48567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 16:09:54.217675   48567 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0224 16:09:54.217689   48567 start.go:485] detecting cgroup driver to use...
	I0224 16:09:54.217700   48567 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 16:09:54.217770   48567 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 16:09:54.230957   48567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0224 16:09:54.239478   48567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 16:09:54.248173   48567 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 16:09:54.248231   48567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 16:09:54.256594   48567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 16:09:54.264913   48567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 16:09:54.273280   48567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 16:09:54.281653   48567 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 16:09:54.289457   48567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 16:09:54.298022   48567 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 16:09:54.305256   48567 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 16:09:54.312632   48567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 16:09:54.392522   48567 ssh_runner.go:195] Run: sudo systemctl restart containerd
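The runs above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver and the runc v2 shim, then restart the service. A minimal shell sketch of the same edits, reusing the sed expressions and paths from this log (run on the minikube node, not the host):

    # rewrite the containerd config in place, then restart (commands as in the log above)
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd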
	I0224 16:09:54.464297   48567 start.go:485] detecting cgroup driver to use...
	I0224 16:09:54.464328   48567 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0224 16:09:54.464388   48567 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 16:09:54.474925   48567 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0224 16:09:54.474996   48567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 16:09:54.486254   48567 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 16:09:54.501154   48567 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 16:09:54.615370   48567 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 16:09:54.719027   48567 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 16:09:54.719046   48567 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0224 16:09:54.733072   48567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 16:09:54.817715   48567 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 16:09:55.073204   48567 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 16:09:55.154314   48567 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 16:09:55.227341   48567 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 16:09:55.298420   48567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 16:09:55.367376   48567 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 16:09:55.379053   48567 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 16:09:55.379133   48567 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 16:09:55.383250   48567 start.go:553] Will wait 60s for crictl version
	I0224 16:09:55.383295   48567 ssh_runner.go:195] Run: which crictl
	I0224 16:09:55.386990   48567 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 16:09:55.492069   48567 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0224 16:09:55.492149   48567 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 16:09:55.517967   48567 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 16:09:55.587495   48567 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0224 16:09:55.587745   48567 cli_runner.go:164] Run: docker exec -t newest-cni-192000 dig +short host.docker.internal
	I0224 16:09:55.703106   48567 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0224 16:09:55.703226   48567 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0224 16:09:55.707856   48567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 16:09:55.717901   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:09:55.802832   48567 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0224 16:09:55.824207   48567 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 16:09:55.824345   48567 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 16:09:55.845755   48567 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 16:09:55.862277   48567 docker.go:560] Images already preloaded, skipping extraction
	I0224 16:09:55.862366   48567 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 16:09:55.883892   48567 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 16:09:55.883915   48567 cache_images.go:84] Images are preloaded, skipping loading
	I0224 16:09:55.884039   48567 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 16:09:55.910982   48567 cni.go:84] Creating CNI manager for ""
	I0224 16:09:55.910999   48567 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 16:09:55.911018   48567 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0224 16:09:55.911032   48567 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-192000 NodeName:newest-cni-192000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0224 16:09:55.911152   48567 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-192000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 16:09:55.911239   48567 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-192000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:newest-cni-192000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0224 16:09:55.911306   48567 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0224 16:09:55.919459   48567 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 16:09:55.919524   48567 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 16:09:55.927067   48567 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0224 16:09:55.939820   48567 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 16:09:55.952730   48567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0224 16:09:55.965722   48567 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0224 16:09:55.969718   48567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 16:09:55.979772   48567 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/newest-cni-192000 for IP: 192.168.67.2
	I0224 16:09:55.979790   48567 certs.go:186] acquiring lock for shared ca certs: {Name:mkabe8f97a4ffa8384a45c3fc6225c6b2025baa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:09:55.979956   48567 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key
	I0224 16:09:55.980004   48567 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key
	I0224 16:09:55.980102   48567 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/newest-cni-192000/client.key
	I0224 16:09:55.980167   48567 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/newest-cni-192000/apiserver.key.c7fa3a9e
	I0224 16:09:55.980217   48567 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/newest-cni-192000/proxy-client.key
	I0224 16:09:55.980450   48567 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem (1338 bytes)
	W0224 16:09:55.980492   48567 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871_empty.pem, impossibly tiny 0 bytes
	I0224 16:09:55.980503   48567 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 16:09:55.980541   48567 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/ca.pem (1078 bytes)
	I0224 16:09:55.980576   48567 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/cert.pem (1123 bytes)
	I0224 16:09:55.980608   48567 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/certs/key.pem (1675 bytes)
	I0224 16:09:55.980681   48567 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem (1708 bytes)
	I0224 16:09:55.981264   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/newest-cni-192000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0224 16:09:55.998809   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/newest-cni-192000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0224 16:09:56.016084   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/newest-cni-192000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 16:09:56.033410   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/newest-cni-192000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0224 16:09:56.050495   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 16:09:56.067893   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 16:09:56.085194   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 16:09:56.102501   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 16:09:56.119966   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/ssl/certs/268712.pem --> /usr/share/ca-certificates/268712.pem (1708 bytes)
	I0224 16:09:56.137489   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 16:09:56.155080   48567 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-26406/.minikube/certs/26871.pem --> /usr/share/ca-certificates/26871.pem (1338 bytes)
	I0224 16:09:56.172661   48567 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 16:09:56.185520   48567 ssh_runner.go:195] Run: openssl version
	I0224 16:09:56.191004   48567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 16:09:56.199307   48567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 16:09:56.203352   48567 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 24 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0224 16:09:56.203397   48567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 16:09:56.208894   48567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 16:09:56.216602   48567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/26871.pem && ln -fs /usr/share/ca-certificates/26871.pem /etc/ssl/certs/26871.pem"
	I0224 16:09:56.225032   48567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/26871.pem
	I0224 16:09:56.229378   48567 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 24 22:48 /usr/share/ca-certificates/26871.pem
	I0224 16:09:56.229424   48567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/26871.pem
	I0224 16:09:56.234757   48567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/26871.pem /etc/ssl/certs/51391683.0"
	I0224 16:09:56.242332   48567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/268712.pem && ln -fs /usr/share/ca-certificates/268712.pem /etc/ssl/certs/268712.pem"
	I0224 16:09:56.250504   48567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/268712.pem
	I0224 16:09:56.254420   48567 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 24 22:48 /usr/share/ca-certificates/268712.pem
	I0224 16:09:56.254465   48567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/268712.pem
	I0224 16:09:56.259895   48567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/268712.pem /etc/ssl/certs/3ec20f2e.0"
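The hash-named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links into /etc/ssl/certs. A minimal sketch of how such a link can be derived by hand, using the minikubeCA path from this run (the HASH variable is illustrative, not minikube's exact code):

    # derive the subject hash of the CA and create the matching trust link
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"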
	I0224 16:09:56.267609   48567 kubeadm.go:401] StartCluster: {Name:newest-cni-192000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-192000 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 16:09:56.267721   48567 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 16:09:56.287268   48567 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 16:09:56.295234   48567 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0224 16:09:56.295255   48567 kubeadm.go:633] restartCluster start
	I0224 16:09:56.295306   48567 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0224 16:09:56.302515   48567 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:09:56.302629   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:09:56.363703   48567 kubeconfig.go:135] verify returned: extract IP: "newest-cni-192000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 16:09:56.363858   48567 kubeconfig.go:146] "newest-cni-192000" context is missing from /Users/jenkins/minikube-integration/15909-26406/kubeconfig - will repair!
	I0224 16:09:56.365453   48567 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/kubeconfig: {Name:mk1182fefc6ba3b7ea2d2356c47127703005a4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:09:56.367076   48567 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0224 16:09:56.375340   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:09:56.375409   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:09:56.384571   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:09:56.885971   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:09:56.886138   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:09:56.897822   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:09:57.384692   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:09:57.384782   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:09:57.394273   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:09:57.886732   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:09:57.886891   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:09:57.898537   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:09:58.385669   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:09:58.385786   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:09:58.396841   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:09:58.884702   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:09:58.884799   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:09:58.894467   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:09:59.385295   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:09:59.385474   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:09:59.396783   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:09:59.886259   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:09:59.886414   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:09:59.897745   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:00.384784   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:00.384890   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:00.394990   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:00.885056   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:00.885216   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:00.896259   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:01.386924   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:01.387093   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:01.398659   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:01.884888   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:01.884984   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:01.894387   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:02.385122   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:02.385259   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:02.396839   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:02.886892   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:02.887041   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:02.898446   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:03.385025   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:03.385119   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:03.394665   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:03.886922   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:03.887096   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:03.898185   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:04.385784   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:04.385891   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:04.396671   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:04.885014   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:04.885101   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:04.894378   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:05.385328   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:05.385476   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:05.397434   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:05.885455   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:05.885660   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:05.896813   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:06.385023   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:06.385135   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:06.395006   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:06.395017   48567 api_server.go:165] Checking apiserver status ...
	I0224 16:10:06.395057   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0224 16:10:06.404905   48567 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:06.404919   48567 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0224 16:10:06.404926   48567 kubeadm.go:1120] stopping kube-system containers ...
	I0224 16:10:06.405003   48567 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 16:10:06.427386   48567 docker.go:456] Stopping containers: [771a18d9e1f0 c71c7672b971 01e66f02345f 1c31ccf18672 0969fee280e2 2b5c2dea8250 8415b9b12001 edb29821766a 3df55e711454 c2759baa6dbd d209915ee2ef 956d62a349a9 0b0be8b83dcd c9a9652ca1f8 92db1bb14ac7 97da4b7c5615]
	I0224 16:10:06.427475   48567 ssh_runner.go:195] Run: docker stop 771a18d9e1f0 c71c7672b971 01e66f02345f 1c31ccf18672 0969fee280e2 2b5c2dea8250 8415b9b12001 edb29821766a 3df55e711454 c2759baa6dbd d209915ee2ef 956d62a349a9 0b0be8b83dcd c9a9652ca1f8 92db1bb14ac7 97da4b7c5615
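The container IDs stopped above come from the docker ps name filter shown just before. A minimal sketch of listing the same kube-system pod containers by hand (the --format string here is illustrative):

    # list kube-system pod containers using the same name filter as restartCluster
    docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}} {{.Names}}'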
	I0224 16:10:06.450127   48567 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0224 16:10:06.460744   48567 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 16:10:06.468476   48567 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb 25 00:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Feb 25 00:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Feb 25 00:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb 25 00:09 /etc/kubernetes/scheduler.conf
	
	I0224 16:10:06.468538   48567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 16:10:06.476093   48567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 16:10:06.483352   48567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 16:10:06.490927   48567 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:06.490983   48567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 16:10:06.498146   48567 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 16:10:06.505486   48567 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0224 16:10:06.505536   48567 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 16:10:06.512664   48567 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 16:10:06.520111   48567 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0224 16:10:06.520125   48567 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 16:10:06.573204   48567 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 16:10:06.941326   48567 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0224 16:10:07.072456   48567 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 16:10:07.135257   48567 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0224 16:10:07.264634   48567 api_server.go:51] waiting for apiserver process to appear ...
	I0224 16:10:07.264708   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 16:10:07.779871   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 16:10:08.278730   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 16:10:08.292255   48567 api_server.go:71] duration metric: took 1.027596584s to wait for apiserver process to appear ...
	I0224 16:10:08.292281   48567 api_server.go:87] waiting for apiserver healthz status ...
	I0224 16:10:08.292296   48567 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63180/healthz ...
	I0224 16:10:08.293682   48567 api_server.go:268] stopped: https://127.0.0.1:63180/healthz: Get "https://127.0.0.1:63180/healthz": EOF
	I0224 16:10:08.793851   48567 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63180/healthz ...
	I0224 16:10:10.743136   48567 api_server.go:278] https://127.0.0.1:63180/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0224 16:10:10.743162   48567 api_server.go:102] status: https://127.0.0.1:63180/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0224 16:10:10.794398   48567 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63180/healthz ...
	I0224 16:10:10.801201   48567 api_server.go:278] https://127.0.0.1:63180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 16:10:10.801217   48567 api_server.go:102] status: https://127.0.0.1:63180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 16:10:11.294068   48567 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63180/healthz ...
	I0224 16:10:11.301003   48567 api_server.go:278] https://127.0.0.1:63180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 16:10:11.301016   48567 api_server.go:102] status: https://127.0.0.1:63180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 16:10:11.794074   48567 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63180/healthz ...
	I0224 16:10:11.799181   48567 api_server.go:278] https://127.0.0.1:63180/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0224 16:10:11.799197   48567 api_server.go:102] status: https://127.0.0.1:63180/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0224 16:10:12.293939   48567 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63180/healthz ...
	I0224 16:10:12.299130   48567 api_server.go:278] https://127.0.0.1:63180/healthz returned 200:
	ok
	I0224 16:10:12.306639   48567 api_server.go:140] control plane version: v1.26.1
	I0224 16:10:12.306655   48567 api_server.go:130] duration metric: took 4.01424576s to wait for apiserver health ...
	I0224 16:10:12.306667   48567 cni.go:84] Creating CNI manager for ""
	I0224 16:10:12.306676   48567 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 16:10:12.330101   48567 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 16:10:12.350093   48567 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 16:10:12.359268   48567 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0224 16:10:12.372976   48567 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 16:10:12.380904   48567 system_pods.go:59] 9 kube-system pods found
	I0224 16:10:12.380920   48567 system_pods.go:61] "coredns-787d4945fb-jdb9z" [047c853d-934d-4dba-800a-89af683da770] Running
	I0224 16:10:12.380926   48567 system_pods.go:61] "coredns-787d4945fb-n2tkw" [e2322ddd-8139-440c-9a48-ad9de6179d94] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 16:10:12.380939   48567 system_pods.go:61] "etcd-newest-cni-192000" [82f256a2-5fc0-445a-8e58-bf4d54bb246c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 16:10:12.380944   48567 system_pods.go:61] "kube-apiserver-newest-cni-192000" [012e7af0-b895-4f36-b678-33fe954412f1] Running
	I0224 16:10:12.380950   48567 system_pods.go:61] "kube-controller-manager-newest-cni-192000" [4ea3031f-28a2-43bb-ad2d-7247b64f1bba] Running
	I0224 16:10:12.380954   48567 system_pods.go:61] "kube-proxy-zsn6p" [b8339da0-1dc0-473c-aedf-72af67b54afc] Running
	I0224 16:10:12.380958   48567 system_pods.go:61] "kube-scheduler-newest-cni-192000" [2384c8bb-9249-4ab3-8ea2-53b4540a9421] Running
	I0224 16:10:12.380962   48567 system_pods.go:61] "metrics-server-7997d45854-gqfp5" [6f56f2cd-2770-4cf6-b751-fd44a353075f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0224 16:10:12.380967   48567 system_pods.go:61] "storage-provisioner" [c2d164cd-c9e8-43c6-99a1-7cc76f8cc00e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0224 16:10:12.380972   48567 system_pods.go:74] duration metric: took 7.986148ms to wait for pod list to return data ...
	I0224 16:10:12.380978   48567 node_conditions.go:102] verifying NodePressure condition ...
	I0224 16:10:12.384121   48567 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0224 16:10:12.384137   48567 node_conditions.go:123] node cpu capacity is 6
	I0224 16:10:12.384148   48567 node_conditions.go:105] duration metric: took 3.164991ms to run NodePressure ...
	I0224 16:10:12.384167   48567 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0224 16:10:12.675578   48567 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 16:10:12.684467   48567 ops.go:34] apiserver oom_adj: -16
	I0224 16:10:12.684480   48567 kubeadm.go:637] restartCluster took 16.388728654s
	I0224 16:10:12.684486   48567 kubeadm.go:403] StartCluster complete in 16.416389695s
	I0224 16:10:12.684501   48567 settings.go:142] acquiring lock: {Name:mk61f6764f7c264302b01ffc8eee0ee0f10d20c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:10:12.684594   48567 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 16:10:12.685207   48567 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/kubeconfig: {Name:mk1182fefc6ba3b7ea2d2356c47127703005a4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 16:10:12.685458   48567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 16:10:12.685503   48567 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0224 16:10:12.685592   48567 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-192000"
	I0224 16:10:12.685597   48567 addons.go:65] Setting default-storageclass=true in profile "newest-cni-192000"
	I0224 16:10:12.685614   48567 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-192000"
	W0224 16:10:12.685623   48567 addons.go:236] addon storage-provisioner should already be in state true
	I0224 16:10:12.685629   48567 addons.go:65] Setting dashboard=true in profile "newest-cni-192000"
	I0224 16:10:12.685654   48567 addons.go:227] Setting addon dashboard=true in "newest-cni-192000"
	I0224 16:10:12.685657   48567 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-192000"
	W0224 16:10:12.685666   48567 addons.go:236] addon dashboard should already be in state true
	I0224 16:10:12.685685   48567 host.go:66] Checking if "newest-cni-192000" exists ...
	I0224 16:10:12.685709   48567 host.go:66] Checking if "newest-cni-192000" exists ...
	I0224 16:10:12.685690   48567 addons.go:65] Setting metrics-server=true in profile "newest-cni-192000"
	I0224 16:10:12.685748   48567 addons.go:227] Setting addon metrics-server=true in "newest-cni-192000"
	I0224 16:10:12.685749   48567 config.go:182] Loaded profile config "newest-cni-192000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	W0224 16:10:12.685760   48567 addons.go:236] addon metrics-server should already be in state true
	I0224 16:10:12.685857   48567 host.go:66] Checking if "newest-cni-192000" exists ...
	I0224 16:10:12.686011   48567 cli_runner.go:164] Run: docker container inspect newest-cni-192000 --format={{.State.Status}}
	I0224 16:10:12.686156   48567 cli_runner.go:164] Run: docker container inspect newest-cni-192000 --format={{.State.Status}}
	I0224 16:10:12.686197   48567 cli_runner.go:164] Run: docker container inspect newest-cni-192000 --format={{.State.Status}}
	I0224 16:10:12.687465   48567 cli_runner.go:164] Run: docker container inspect newest-cni-192000 --format={{.State.Status}}
	I0224 16:10:12.695223   48567 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-192000" context rescaled to 1 replicas
	I0224 16:10:12.695281   48567 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 16:10:12.719698   48567 out.go:177] * Verifying Kubernetes components...
	I0224 16:10:12.761771   48567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 16:10:12.817554   48567 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 16:10:12.855107   48567 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 16:10:12.928694   48567 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0224 16:10:12.928707   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 16:10:12.863981   48567 addons.go:227] Setting addon default-storageclass=true in "newest-cni-192000"
	I0224 16:10:12.891838   48567 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0224 16:10:12.900085   48567 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0224 16:10:12.900133   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:10:12.928885   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	W0224 16:10:12.949991   48567 addons.go:236] addon default-storageclass should already be in state true
	I0224 16:10:12.950041   48567 host.go:66] Checking if "newest-cni-192000" exists ...
	I0224 16:10:12.988180   48567 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0224 16:10:12.988858   48567 cli_runner.go:164] Run: docker container inspect newest-cni-192000 --format={{.State.Status}}
	I0224 16:10:13.009728   48567 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0224 16:10:13.009764   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0224 16:10:13.046895   48567 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0224 16:10:13.046912   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0224 16:10:13.046981   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:10:13.046999   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:10:13.063844   48567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63181 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/newest-cni-192000/id_rsa Username:docker}
	I0224 16:10:13.064439   48567 api_server.go:51] waiting for apiserver process to appear ...
	I0224 16:10:13.064576   48567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 16:10:13.090595   48567 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 16:10:13.090614   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 16:10:13.090760   48567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192000
	I0224 16:10:13.097023   48567 api_server.go:71] duration metric: took 401.675239ms to wait for apiserver process to appear ...
	I0224 16:10:13.097060   48567 api_server.go:87] waiting for apiserver healthz status ...
	I0224 16:10:13.097080   48567 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63180/healthz ...
	I0224 16:10:13.107043   48567 api_server.go:278] https://127.0.0.1:63180/healthz returned 200:
	ok
	I0224 16:10:13.109664   48567 api_server.go:140] control plane version: v1.26.1
	I0224 16:10:13.109685   48567 api_server.go:130] duration metric: took 12.615664ms to wait for apiserver health ...
	I0224 16:10:13.109694   48567 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 16:10:13.120690   48567 system_pods.go:59] 9 kube-system pods found
	I0224 16:10:13.120714   48567 system_pods.go:61] "coredns-787d4945fb-jdb9z" [047c853d-934d-4dba-800a-89af683da770] Running
	I0224 16:10:13.120733   48567 system_pods.go:61] "coredns-787d4945fb-n2tkw" [e2322ddd-8139-440c-9a48-ad9de6179d94] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0224 16:10:13.120750   48567 system_pods.go:61] "etcd-newest-cni-192000" [82f256a2-5fc0-445a-8e58-bf4d54bb246c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0224 16:10:13.120760   48567 system_pods.go:61] "kube-apiserver-newest-cni-192000" [012e7af0-b895-4f36-b678-33fe954412f1] Running
	I0224 16:10:13.120775   48567 system_pods.go:61] "kube-controller-manager-newest-cni-192000" [4ea3031f-28a2-43bb-ad2d-7247b64f1bba] Running
	I0224 16:10:13.120788   48567 system_pods.go:61] "kube-proxy-zsn6p" [b8339da0-1dc0-473c-aedf-72af67b54afc] Running
	I0224 16:10:13.120798   48567 system_pods.go:61] "kube-scheduler-newest-cni-192000" [2384c8bb-9249-4ab3-8ea2-53b4540a9421] Running
	I0224 16:10:13.120811   48567 system_pods.go:61] "metrics-server-7997d45854-gqfp5" [6f56f2cd-2770-4cf6-b751-fd44a353075f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0224 16:10:13.120820   48567 system_pods.go:61] "storage-provisioner" [c2d164cd-c9e8-43c6-99a1-7cc76f8cc00e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0224 16:10:13.120828   48567 system_pods.go:74] duration metric: took 11.127514ms to wait for pod list to return data ...
	I0224 16:10:13.120840   48567 default_sa.go:34] waiting for default service account to be created ...
	I0224 16:10:13.146819   48567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63181 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/newest-cni-192000/id_rsa Username:docker}
	I0224 16:10:13.146835   48567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63181 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/newest-cni-192000/id_rsa Username:docker}
	I0224 16:10:13.157433   48567 default_sa.go:45] found service account: "default"
	I0224 16:10:13.157468   48567 default_sa.go:55] duration metric: took 36.617902ms for default service account to be created ...
	I0224 16:10:13.157483   48567 kubeadm.go:578] duration metric: took 462.145406ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0224 16:10:13.157503   48567 node_conditions.go:102] verifying NodePressure condition ...
	I0224 16:10:13.161403   48567 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0224 16:10:13.161442   48567 node_conditions.go:123] node cpu capacity is 6
	I0224 16:10:13.161454   48567 node_conditions.go:105] duration metric: took 3.944786ms to run NodePressure ...
	I0224 16:10:13.161465   48567 start.go:228] waiting for startup goroutines ...
	I0224 16:10:13.177769   48567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63181 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/newest-cni-192000/id_rsa Username:docker}
	I0224 16:10:13.265405   48567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 16:10:13.274947   48567 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0224 16:10:13.274960   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0224 16:10:13.282229   48567 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0224 16:10:13.282246   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0224 16:10:13.292331   48567 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0224 16:10:13.292346   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0224 16:10:13.362958   48567 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0224 16:10:13.362976   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0224 16:10:13.377417   48567 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 16:10:13.377433   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0224 16:10:13.378786   48567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 16:10:13.394149   48567 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0224 16:10:13.394164   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0224 16:10:13.472780   48567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 16:10:13.481841   48567 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0224 16:10:13.481856   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0224 16:10:13.577942   48567 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0224 16:10:13.577968   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0224 16:10:13.669288   48567 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0224 16:10:13.669303   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0224 16:10:13.760222   48567 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0224 16:10:13.760237   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0224 16:10:13.783195   48567 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0224 16:10:13.783213   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0224 16:10:13.861948   48567 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0224 16:10:13.861967   48567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0224 16:10:13.882580   48567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0224 16:10:14.659040   48567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.393550041s)
	I0224 16:10:14.659079   48567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.280229949s)
	I0224 16:10:14.659138   48567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.186298904s)
	I0224 16:10:14.659158   48567 addons.go:457] Verifying addon metrics-server=true in "newest-cni-192000"
	I0224 16:10:14.788928   48567 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-192000 addons enable metrics-server	
	
	
	I0224 16:10:14.809583   48567 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0224 16:10:14.851517   48567 addons.go:492] enable addons completed in 2.165952412s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0224 16:10:14.851542   48567 start.go:233] waiting for cluster config update ...
	I0224 16:10:14.851560   48567 start.go:242] writing updated cluster config ...
	I0224 16:10:14.851904   48567 ssh_runner.go:195] Run: rm -f paused
	I0224 16:10:14.893999   48567 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0224 16:10:14.915708   48567 out.go:177] * Done! kubectl is now configured to use "newest-cni-192000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-24 23:45:08 UTC, end at Sat 2023-02-25 00:12:11 UTC. --
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.356016738Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.356576667Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.356628748Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357726958Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357784747Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357825944Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357836059Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357890010Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357942974Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357965911Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357981566Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.357993788Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.358105260Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.358296891Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.358366239Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.358900961Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.366092775Z" level=info msg="Loading containers: start."
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.444603684Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.477227417Z" level=info msg="Loading containers: done."
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.485430316Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.485498171Z" level=info msg="Daemon has completed initialization"
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.506662985Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 24 23:45:11 old-k8s-version-583000 systemd[1]: Started Docker Application Container Engine.
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.510654920Z" level=info msg="API listen on [::]:2376"
	Feb 24 23:45:11 old-k8s-version-583000 dockerd[638]: time="2023-02-24T23:45:11.516789696Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-02-25T00:12:13Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  00:12:13 up  3:11,  0 users,  load average: 0.63, 0.68, 0.82
	Linux old-k8s-version-583000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-24 23:45:08 UTC, end at Sat 2023-02-25 00:12:13 UTC. --
	Feb 25 00:12:11 old-k8s-version-583000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 25 00:12:12 old-k8s-version-583000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1667.
	Feb 25 00:12:12 old-k8s-version-583000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 25 00:12:12 old-k8s-version-583000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 25 00:12:12 old-k8s-version-583000 kubelet[33937]: I0225 00:12:12.662141   33937 server.go:410] Version: v1.16.0
	Feb 25 00:12:12 old-k8s-version-583000 kubelet[33937]: I0225 00:12:12.662477   33937 plugins.go:100] No cloud provider specified.
	Feb 25 00:12:12 old-k8s-version-583000 kubelet[33937]: I0225 00:12:12.662515   33937 server.go:773] Client rotation is on, will bootstrap in background
	Feb 25 00:12:12 old-k8s-version-583000 kubelet[33937]: I0225 00:12:12.664415   33937 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 25 00:12:12 old-k8s-version-583000 kubelet[33937]: W0225 00:12:12.667151   33937 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 25 00:12:12 old-k8s-version-583000 kubelet[33937]: W0225 00:12:12.667219   33937 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 25 00:12:12 old-k8s-version-583000 kubelet[33937]: F0225 00:12:12.667246   33937 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 25 00:12:12 old-k8s-version-583000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 25 00:12:12 old-k8s-version-583000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 25 00:12:13 old-k8s-version-583000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Feb 25 00:12:13 old-k8s-version-583000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 25 00:12:13 old-k8s-version-583000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 25 00:12:13 old-k8s-version-583000 kubelet[33951]: I0225 00:12:13.411941   33951 server.go:410] Version: v1.16.0
	Feb 25 00:12:13 old-k8s-version-583000 kubelet[33951]: I0225 00:12:13.412341   33951 plugins.go:100] No cloud provider specified.
	Feb 25 00:12:13 old-k8s-version-583000 kubelet[33951]: I0225 00:12:13.412377   33951 server.go:773] Client rotation is on, will bootstrap in background
	Feb 25 00:12:13 old-k8s-version-583000 kubelet[33951]: I0225 00:12:13.414259   33951 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 25 00:12:13 old-k8s-version-583000 kubelet[33951]: W0225 00:12:13.414944   33951 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 25 00:12:13 old-k8s-version-583000 kubelet[33951]: W0225 00:12:13.415016   33951 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 25 00:12:13 old-k8s-version-583000 kubelet[33951]: F0225 00:12:13.415043   33951 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 25 00:12:13 old-k8s-version-583000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 25 00:12:13 old-k8s-version-583000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0224 16:12:13.571841   48911 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-583000 -n old-k8s-version-583000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 2 (397.404866ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-583000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.77s)
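The kubelet log captured above shows the node crash-looping on "failed to run Kubelet: mountpoint for cpu not found" (restart counter past 1660), which is consistent with the apiserver staying down and the addon check timing out. A minimal diagnostic sketch, assuming the old-k8s-version-583000 profile still exists on the test host; these commands are illustrative and are not part of the test run:

	out/minikube-darwin-amd64 -p old-k8s-version-583000 ssh "grep cpu /proc/mounts || echo 'no cpu cgroup mount'"
	out/minikube-darwin-amd64 -p old-k8s-version-583000 ssh "sudo journalctl -u kubelet --no-pager | tail -n 20"

Both commands only read state from the node container (the cgroup mount table and the most recent kubelet restarts); neither modifies the cluster.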

                                                
                                    

Test pass (272/306)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 27.7
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.26.1/json-events 25.26
11 TestDownloadOnly/v1.26.1/preload-exists 0
14 TestDownloadOnly/v1.26.1/kubectl 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.66
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
18 TestDownloadOnlyKic 2.01
19 TestBinaryMirror 1.64
20 TestOffline 48.93
22 TestAddons/Setup 204.5
26 TestAddons/parallel/MetricsServer 5.58
27 TestAddons/parallel/HelmTiller 14.61
29 TestAddons/parallel/CSI 59.93
30 TestAddons/parallel/Headlamp 17.38
31 TestAddons/parallel/CloudSpanner 5.55
34 TestAddons/serial/GCPAuth/Namespaces 0.11
35 TestAddons/StoppedEnableDisable 11.48
36 TestCertOptions 34.43
37 TestCertExpiration 250.69
38 TestDockerFlags 35.64
39 TestForceSystemdFlag 36.15
40 TestForceSystemdEnv 32.44
42 TestHyperKitDriverInstallOrUpdate 39.75
45 TestErrorSpam/setup 30.49
46 TestErrorSpam/start 2.29
47 TestErrorSpam/status 1.24
48 TestErrorSpam/pause 1.82
49 TestErrorSpam/unpause 1.86
50 TestErrorSpam/stop 11.52
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 46.07
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 42.23
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.07
61 TestFunctional/serial/CacheCmd/cache/add_remote 10.02
62 TestFunctional/serial/CacheCmd/cache/add_local 3.28
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.43
66 TestFunctional/serial/CacheCmd/cache/cache_reload 3.76
67 TestFunctional/serial/CacheCmd/cache/delete 0.14
68 TestFunctional/serial/MinikubeKubectlCmd 0.54
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.82
70 TestFunctional/serial/ExtraConfig 45.82
71 TestFunctional/serial/ComponentHealth 0.06
72 TestFunctional/serial/LogsCmd 3
73 TestFunctional/serial/LogsFileCmd 3.23
75 TestFunctional/parallel/ConfigCmd 0.44
76 TestFunctional/parallel/DashboardCmd 13.45
77 TestFunctional/parallel/DryRun 1.71
78 TestFunctional/parallel/InternationalLanguage 0.88
79 TestFunctional/parallel/StatusCmd 1.33
84 TestFunctional/parallel/AddonsCmd 0.25
85 TestFunctional/parallel/PersistentVolumeClaim 25.67
87 TestFunctional/parallel/SSHCmd 0.81
88 TestFunctional/parallel/CpCmd 2.13
89 TestFunctional/parallel/MySQL 26.57
90 TestFunctional/parallel/FileSync 0.43
91 TestFunctional/parallel/CertSync 2.71
95 TestFunctional/parallel/NodeLabels 0.07
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
99 TestFunctional/parallel/License 0.92
100 TestFunctional/parallel/Version/short 0.13
101 TestFunctional/parallel/Version/components 1.02
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.39
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
106 TestFunctional/parallel/ImageCommands/ImageBuild 5.15
107 TestFunctional/parallel/ImageCommands/Setup 3.35
108 TestFunctional/parallel/DockerEnv/bash 2.02
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.39
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.34
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.81
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.88
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.02
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.91
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.78
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.84
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.14
123 TestFunctional/parallel/ServiceCmd/ServiceJSONOutput 0.62
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.53
131 TestFunctional/parallel/ProfileCmd/profile_list 0.49
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
133 TestFunctional/parallel/MountCmd/any-port 9.5
134 TestFunctional/parallel/MountCmd/specific-port 2.31
135 TestFunctional/delete_addon-resizer_images 0.15
136 TestFunctional/delete_my-image_image 0.06
137 TestFunctional/delete_minikube_cached_images 0.06
141 TestImageBuild/serial/NormalBuild 2.19
142 TestImageBuild/serial/BuildWithBuildArg 0.96
143 TestImageBuild/serial/BuildWithDockerIgnore 0.48
144 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.41
154 TestJSONOutput/start/Command 44.96
155 TestJSONOutput/start/Audit 0
157 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/pause/Command 0.6
161 TestJSONOutput/pause/Audit 0
163 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/unpause/Command 0.6
167 TestJSONOutput/unpause/Audit 0
169 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/stop/Command 10.87
173 TestJSONOutput/stop/Audit 0
175 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
177 TestErrorJSONOutput 0.74
179 TestKicCustomNetwork/create_custom_network 30.89
180 TestKicCustomNetwork/use_default_bridge_network 31.01
181 TestKicExistingNetwork 31.43
182 TestKicCustomSubnet 30.83
183 TestKicStaticIP 31.3
184 TestMainNoArgs 0.07
185 TestMinikubeProfile 63.65
188 TestMountStart/serial/StartWithMountFirst 8.16
189 TestMountStart/serial/VerifyMountFirst 0.4
190 TestMountStart/serial/StartWithMountSecond 8.4
191 TestMountStart/serial/VerifyMountSecond 0.4
192 TestMountStart/serial/DeleteFirst 2.14
193 TestMountStart/serial/VerifyMountPostDelete 0.39
194 TestMountStart/serial/Stop 1.57
195 TestMountStart/serial/RestartStopped 6.34
196 TestMountStart/serial/VerifyMountPostStop 0.4
199 TestMultiNode/serial/FreshStart2Nodes 78.64
202 TestMultiNode/serial/AddNode 22.82
203 TestMultiNode/serial/ProfileList 0.51
204 TestMultiNode/serial/CopyFile 14.6
205 TestMultiNode/serial/StopNode 3.11
206 TestMultiNode/serial/StartAfterStop 10.34
207 TestMultiNode/serial/RestartKeepsNodes 89.51
208 TestMultiNode/serial/DeleteNode 6.18
209 TestMultiNode/serial/StopMultiNode 21.87
210 TestMultiNode/serial/RestartMultiNode 76.54
211 TestMultiNode/serial/ValidateNameConflict 33.64
215 TestPreload 194.79
217 TestScheduledStopUnix 103.59
218 TestSkaffold 78.84
220 TestInsufficientStorage 14.74
236 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 29.4
237 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 68.96
238 TestStoppedBinaryUpgrade/Setup 4.82
240 TestStoppedBinaryUpgrade/MinikubeLogs 3.53
242 TestPause/serial/Start 53.08
243 TestPause/serial/SecondStartNoReconfiguration 44.41
244 TestPause/serial/Pause 0.68
245 TestPause/serial/VerifyStatus 0.41
246 TestPause/serial/Unpause 0.68
247 TestPause/serial/PauseAgain 0.75
248 TestPause/serial/DeletePaused 2.61
249 TestPause/serial/VerifyDeletedResources 0.56
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.39
259 TestNoKubernetes/serial/StartWithK8s 30.57
260 TestNoKubernetes/serial/StartWithStopK8s 18.82
261 TestNoKubernetes/serial/Start 7.31
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
263 TestNoKubernetes/serial/ProfileList 34.38
264 TestNoKubernetes/serial/Stop 1.6
265 TestNoKubernetes/serial/StartNoArgs 5.13
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
267 TestNetworkPlugins/group/auto/Start 45.08
268 TestNetworkPlugins/group/auto/KubeletFlags 0.41
269 TestNetworkPlugins/group/auto/NetCatPod 18.2
270 TestNetworkPlugins/group/auto/DNS 0.13
271 TestNetworkPlugins/group/auto/Localhost 0.11
272 TestNetworkPlugins/group/auto/HairPin 0.12
273 TestNetworkPlugins/group/flannel/Start 58.83
274 TestNetworkPlugins/group/flannel/ControllerPod 5.01
275 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
276 TestNetworkPlugins/group/flannel/NetCatPod 13.2
277 TestNetworkPlugins/group/flannel/DNS 0.13
278 TestNetworkPlugins/group/flannel/Localhost 0.11
279 TestNetworkPlugins/group/flannel/HairPin 0.12
280 TestNetworkPlugins/group/kindnet/Start 59.97
281 TestNetworkPlugins/group/enable-default-cni/Start 47.95
282 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
283 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
284 TestNetworkPlugins/group/kindnet/NetCatPod 13.23
285 TestNetworkPlugins/group/kindnet/DNS 0.13
286 TestNetworkPlugins/group/kindnet/Localhost 0.12
287 TestNetworkPlugins/group/kindnet/HairPin 0.12
288 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
289 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.25
290 TestNetworkPlugins/group/bridge/Start 58.72
291 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
292 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
293 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
294 TestNetworkPlugins/group/kubenet/Start 61.05
295 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
296 TestNetworkPlugins/group/bridge/NetCatPod 16.3
297 TestNetworkPlugins/group/bridge/DNS 0.13
298 TestNetworkPlugins/group/bridge/Localhost 0.11
299 TestNetworkPlugins/group/bridge/HairPin 0.12
300 TestNetworkPlugins/group/kubenet/KubeletFlags 0.49
301 TestNetworkPlugins/group/kubenet/NetCatPod 16.21
302 TestNetworkPlugins/group/custom-flannel/Start 58.25
303 TestNetworkPlugins/group/kubenet/DNS 0.16
304 TestNetworkPlugins/group/kubenet/Localhost 0.14
305 TestNetworkPlugins/group/kubenet/HairPin 0.17
306 TestNetworkPlugins/group/calico/Start 74.31
307 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
308 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.21
309 TestNetworkPlugins/group/custom-flannel/DNS 0.15
310 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
311 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
312 TestNetworkPlugins/group/false/Start 54.86
313 TestNetworkPlugins/group/calico/ControllerPod 5.03
314 TestNetworkPlugins/group/calico/KubeletFlags 0.44
315 TestNetworkPlugins/group/calico/NetCatPod 18.22
316 TestNetworkPlugins/group/calico/DNS 0.13
317 TestNetworkPlugins/group/calico/Localhost 0.11
318 TestNetworkPlugins/group/calico/HairPin 0.12
321 TestNetworkPlugins/group/false/KubeletFlags 0.42
322 TestNetworkPlugins/group/false/NetCatPod 13.21
323 TestNetworkPlugins/group/false/DNS 0.13
324 TestNetworkPlugins/group/false/Localhost 0.11
325 TestNetworkPlugins/group/false/HairPin 0.12
327 TestStartStop/group/no-preload/serial/FirstStart 72.73
328 TestStartStop/group/no-preload/serial/DeployApp 10.27
329 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
330 TestStartStop/group/no-preload/serial/Stop 10.88
331 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.37
332 TestStartStop/group/no-preload/serial/SecondStart 581.66
335 TestStartStop/group/old-k8s-version/serial/Stop 1.6
336 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.38
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
339 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.44
341 TestStartStop/group/no-preload/serial/Pause 3.17
343 TestStartStop/group/embed-certs/serial/FirstStart 44.41
344 TestStartStop/group/embed-certs/serial/DeployApp 11.28
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.83
346 TestStartStop/group/embed-certs/serial/Stop 10.9
347 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.37
348 TestStartStop/group/embed-certs/serial/SecondStart 557.99
350 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
351 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
352 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.43
353 TestStartStop/group/embed-certs/serial/Pause 3.41
355 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 46.14
357 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 15.28
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
359 TestStartStop/group/default-k8s-diff-port/serial/Stop 11
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.38
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 308.14
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
363 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
364 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.45
365 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.31
367 TestStartStop/group/newest-cni/serial/FirstStart 42.52
368 TestStartStop/group/newest-cni/serial/DeployApp 0
369 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.94
370 TestStartStop/group/newest-cni/serial/Stop 6.05
371 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.38
372 TestStartStop/group/newest-cni/serial/SecondStart 24.68
373 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.45
376 TestStartStop/group/newest-cni/serial/Pause 3.22
x
+
TestDownloadOnly/v1.16.0/json-events (27.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-752000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-752000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (27.69915226s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (27.70s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-752000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-752000: exit status 85 (287.469841ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-752000 | jenkins | v1.29.0 | 24 Feb 23 14:40 PST |          |
	|         | -p download-only-752000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 14:40:47
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 14:40:47.245716   26875 out.go:296] Setting OutFile to fd 1 ...
	I0224 14:40:47.245874   26875 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:40:47.245879   26875 out.go:309] Setting ErrFile to fd 2...
	I0224 14:40:47.245883   26875 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:40:47.245982   26875 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	W0224 14:40:47.246082   26875 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15909-26406/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15909-26406/.minikube/config/config.json: no such file or directory
	I0224 14:40:47.247619   26875 out.go:303] Setting JSON to true
	I0224 14:40:47.266036   26875 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6021,"bootTime":1677272426,"procs":396,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 14:40:47.266112   26875 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 14:40:47.288050   26875 out.go:97] [download-only-752000] minikube v1.29.0 on Darwin 13.2.1
	I0224 14:40:47.288346   26875 notify.go:220] Checking for updates...
	I0224 14:40:47.309812   26875 out.go:169] MINIKUBE_LOCATION=15909
	W0224 14:40:47.288339   26875 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball: no such file or directory
	I0224 14:40:47.353709   26875 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 14:40:47.375086   26875 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 14:40:47.397114   26875 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 14:40:47.418863   26875 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	W0224 14:40:47.460866   26875 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0224 14:40:47.461270   26875 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 14:40:47.523823   26875 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 14:40:47.523939   26875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 14:40:47.663811   26875 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-24 22:40:47.572111556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 14:40:47.685528   26875 out.go:97] Using the docker driver based on user configuration
	I0224 14:40:47.685633   26875 start.go:296] selected driver: docker
	I0224 14:40:47.685649   26875 start.go:857] validating driver "docker" against <nil>
	I0224 14:40:47.685858   26875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 14:40:47.827654   26875 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-24 22:40:47.736696021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 14:40:47.827784   26875 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0224 14:40:47.830248   26875 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0224 14:40:47.830398   26875 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 14:40:47.852325   26875 out.go:169] Using Docker Desktop driver with root privileges
	I0224 14:40:47.873912   26875 cni.go:84] Creating CNI manager for ""
	I0224 14:40:47.873952   26875 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0224 14:40:47.873968   26875 start_flags.go:319] config:
	{Name:download-only-752000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-752000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 14:40:47.895828   26875 out.go:97] Starting control plane node download-only-752000 in cluster download-only-752000
	I0224 14:40:47.895934   26875 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 14:40:47.918015   26875 out.go:97] Pulling base image ...
	I0224 14:40:47.918148   26875 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 14:40:47.918251   26875 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 14:40:47.972320   26875 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0224 14:40:47.972579   26875 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0224 14:40:47.972730   26875 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0224 14:40:48.060382   26875 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0224 14:40:48.060407   26875 cache.go:57] Caching tarball of preloaded images
	I0224 14:40:48.060655   26875 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 14:40:48.082429   26875 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0224 14:40:48.082466   26875 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0224 14:40:48.295163   26875 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0224 14:41:08.356736   26875 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0224 14:41:08.356884   26875 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0224 14:41:08.923600   26875 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0224 14:41:08.923795   26875 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/download-only-752000/config.json ...
	I0224 14:41:08.923820   26875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/download-only-752000/config.json: {Name:mk4705b65c21b6a312f578c274379041f5cf10ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 14:41:08.924069   26875 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0224 14:41:08.924335   26875 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-752000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/v1.26.1/json-events (25.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-752000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-752000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker : (25.259497052s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (25.26s)

                                                
                                    
TestDownloadOnly/v1.26.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/kubectl
--- PASS: TestDownloadOnly/v1.26.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-752000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-752000: exit status 85 (287.644638ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-752000 | jenkins | v1.29.0 | 24 Feb 23 14:40 PST |          |
	|         | -p download-only-752000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-752000 | jenkins | v1.29.0 | 24 Feb 23 14:41 PST |          |
	|         | -p download-only-752000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/24 14:41:15
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 14:41:15.238868   26935 out.go:296] Setting OutFile to fd 1 ...
	I0224 14:41:15.239049   26935 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:41:15.239054   26935 out.go:309] Setting ErrFile to fd 2...
	I0224 14:41:15.239071   26935 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:41:15.239216   26935 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	W0224 14:41:15.239346   26935 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15909-26406/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15909-26406/.minikube/config/config.json: no such file or directory
	I0224 14:41:15.240653   26935 out.go:303] Setting JSON to true
	I0224 14:41:15.258793   26935 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6049,"bootTime":1677272426,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 14:41:15.258869   26935 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 14:41:15.281000   26935 out.go:97] [download-only-752000] minikube v1.29.0 on Darwin 13.2.1
	I0224 14:41:15.281249   26935 notify.go:220] Checking for updates...
	I0224 14:41:15.302818   26935 out.go:169] MINIKUBE_LOCATION=15909
	I0224 14:41:15.324208   26935 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 14:41:15.346135   26935 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 14:41:15.367966   26935 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 14:41:15.390167   26935 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	W0224 14:41:15.433731   26935 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0224 14:41:15.434394   26935 config.go:182] Loaded profile config "download-only-752000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0224 14:41:15.434485   26935 start.go:765] api.Load failed for download-only-752000: filestore "download-only-752000": Docker machine "download-only-752000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0224 14:41:15.434568   26935 driver.go:365] Setting default libvirt URI to qemu:///system
	W0224 14:41:15.434608   26935 start.go:765] api.Load failed for download-only-752000: filestore "download-only-752000": Docker machine "download-only-752000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0224 14:41:15.494134   26935 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 14:41:15.494250   26935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 14:41:15.635511   26935 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-24 22:41:15.543105839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 14:41:15.657006   26935 out.go:97] Using the docker driver based on existing profile
	I0224 14:41:15.657103   26935 start.go:296] selected driver: docker
	I0224 14:41:15.657118   26935 start.go:857] validating driver "docker" against &{Name:download-only-752000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-752000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 14:41:15.657438   26935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 14:41:15.797430   26935 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-24 22:41:15.707131947 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 14:41:15.799995   26935 cni.go:84] Creating CNI manager for ""
	I0224 14:41:15.800017   26935 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 14:41:15.800031   26935 start_flags.go:319] config:
	{Name:download-only-752000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-752000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 14:41:15.821977   26935 out.go:97] Starting control plane node download-only-752000 in cluster download-only-752000
	I0224 14:41:15.822089   26935 cache.go:120] Beginning downloading kic base image for docker with docker
	I0224 14:41:15.843585   26935 out.go:97] Pulling base image ...
	I0224 14:41:15.843705   26935 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 14:41:15.843807   26935 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0224 14:41:15.899022   26935 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0224 14:41:15.899270   26935 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0224 14:41:15.899302   26935 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory, skipping pull
	I0224 14:41:15.899307   26935 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in cache, skipping pull
	I0224 14:41:15.899314   26935 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc as a tarball
	I0224 14:41:15.937756   26935 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 14:41:15.937785   26935 cache.go:57] Caching tarball of preloaded images
	I0224 14:41:15.938119   26935 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 14:41:15.959813   26935 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0224 14:41:15.959922   26935 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0224 14:41:16.160484   26935 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:c6cc8ea1da4e19500d6fe35540785ea8 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0224 14:41:35.187403   26935 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0224 14:41:35.187628   26935 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0224 14:41:35.789332   26935 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0224 14:41:35.789432   26935 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/download-only-752000/config.json ...
	I0224 14:41:35.789832   26935 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0224 14:41:35.790231   26935 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.1/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.26.1/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15909-26406/.minikube/cache/darwin/amd64/v1.26.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-752000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.66s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.66s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-752000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

                                                
                                    
TestDownloadOnlyKic (2.01s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-855000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-855000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-855000
--- PASS: TestDownloadOnlyKic (2.01s)

                                                
                                    
TestBinaryMirror (1.64s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-116000 --alsologtostderr --binary-mirror http://127.0.0.1:56384 --driver=docker 
aaa_download_only_test.go:308: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-116000 --alsologtostderr --binary-mirror http://127.0.0.1:56384 --driver=docker : (1.025865208s)
helpers_test.go:175: Cleaning up "binary-mirror-116000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-116000
--- PASS: TestBinaryMirror (1.64s)

                                                
                                    
TestOffline (48.93s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-132000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-132000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (46.255172456s)
helpers_test.go:175: Cleaning up "offline-docker-132000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-132000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-132000: (2.669572276s)
--- PASS: TestOffline (48.93s)

                                                
                                    
TestAddons/Setup (204.5s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-821000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-821000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m24.495760616s)
--- PASS: TestAddons/Setup (204.50s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.58s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.828467ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-jk4wj" [e5723425-8f98-4033-9735-48b2744e833f] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010077863s
addons_test.go:380: (dbg) Run:  kubectl --context addons-821000 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-darwin-amd64 -p addons-821000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.58s)

                                                
                                    
TestAddons/parallel/HelmTiller (14.61s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 2.234558ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-2jkwj" [6b05e1b9-2ae5-4bbb-b624-c7def28e83bf] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009344025s
addons_test.go:438: (dbg) Run:  kubectl --context addons-821000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-821000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.101314073s)
addons_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 -p addons-821000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.61s)

                                                
                                    
TestAddons/parallel/CSI (59.93s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 5.001404ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-821000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-821000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [20cf0c26-3e74-4a09-a920-3a5477d978fc] Pending
helpers_test.go:344: "task-pv-pod" [20cf0c26-3e74-4a09-a920-3a5477d978fc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [20cf0c26-3e74-4a09-a920-3a5477d978fc] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.008938541s
addons_test.go:549: (dbg) Run:  kubectl --context addons-821000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-821000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-821000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-821000 delete pod task-pv-pod
addons_test.go:565: (dbg) Run:  kubectl --context addons-821000 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-821000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-821000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0743864d-9b0b-4c85-a7af-ab6f00f4afb8] Pending
helpers_test.go:344: "task-pv-pod-restore" [0743864d-9b0b-4c85-a7af-ab6f00f4afb8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0743864d-9b0b-4c85-a7af-ab6f00f4afb8] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.009539607s
addons_test.go:591: (dbg) Run:  kubectl --context addons-821000 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-821000 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-821000 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-darwin-amd64 -p addons-821000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-darwin-amd64 -p addons-821000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.565516837s)
addons_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 -p addons-821000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (59.93s)

                                                
                                    
TestAddons/parallel/Headlamp (17.38s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-821000 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-821000 --alsologtostderr -v=1: (1.367004092s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-4mm9j" [5576aef6-d37a-45a0-a1d7-1df9f8f73964] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-4mm9j" [5576aef6-d37a-45a0-a1d7-1df9f8f73964] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.013473578s
--- PASS: TestAddons/parallel/Headlamp (17.38s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-dnh95" [2b31ae04-c379-427a-bf41-db1d2711bc9b] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009042476s
addons_test.go:813: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-821000
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-821000 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-821000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.48s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-821000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-821000: (11.040760895s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-821000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-821000
--- PASS: TestAddons/StoppedEnableDisable (11.48s)

                                                
                                    
TestCertOptions (34.43s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-066000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-066000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (30.75775078s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-066000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-066000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-066000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-066000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-066000: (2.654798175s)
--- PASS: TestCertOptions (34.43s)

                                                
                                    
TestCertExpiration (250.69s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-713000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-713000 --memory=2048 --cert-expiration=3m --driver=docker : (36.131899256s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-713000 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-713000 --memory=2048 --cert-expiration=8760h --driver=docker : (31.956455426s)
helpers_test.go:175: Cleaning up "cert-expiration-713000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-713000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-713000: (2.598906802s)
--- PASS: TestCertExpiration (250.69s)

                                                
                                    
TestDockerFlags (35.64s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-003000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0224 15:20:54.207259   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-003000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (31.974257697s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-003000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-003000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-003000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-003000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-003000: (2.771220405s)
--- PASS: TestDockerFlags (35.64s)

                                                
                                    
TestForceSystemdFlag (36.15s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-520000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-520000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (32.934401485s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-520000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-520000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-520000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-520000: (2.705273689s)
--- PASS: TestForceSystemdFlag (36.15s)

                                                
                                    
TestForceSystemdEnv (32.44s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-647000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
* Starting control plane node minikube in cluster minikube
* Download complete!
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-647000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (29.206943467s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-647000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-647000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-647000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-647000: (2.768195096s)
--- PASS: TestForceSystemdEnv (32.44s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (39.75s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1548771716/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1548771716/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1548771716/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
E0224 15:20:10.445709   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
--- PASS: TestHyperKitDriverInstallOrUpdate (39.75s)

                                                
                                    
TestErrorSpam/setup (30.49s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-967000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-967000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 --driver=docker : (30.487136377s)
--- PASS: TestErrorSpam/setup (30.49s)

                                                
                                    
TestErrorSpam/start (2.29s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 start --dry-run
--- PASS: TestErrorSpam/start (2.29s)

                                                
                                    
TestErrorSpam/status (1.24s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 status
--- PASS: TestErrorSpam/status (1.24s)

                                                
                                    
TestErrorSpam/pause (1.82s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 pause
--- PASS: TestErrorSpam/pause (1.82s)

                                                
                                    
TestErrorSpam/unpause (1.86s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

                                                
                                    
TestErrorSpam/stop (11.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 stop: (10.869102114s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-967000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-967000 stop
--- PASS: TestErrorSpam/stop (11.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1820: local sync path: /Users/jenkins/minikube-integration/15909-26406/.minikube/files/etc/test/nested/copy/26871/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (46.07s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2199: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-691000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2199: (dbg) Done: out/minikube-darwin-amd64 start -p functional-691000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (46.067342046s)
--- PASS: TestFunctional/serial/StartWithProxy (46.07s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (42.23s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:653: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-691000 --alsologtostderr -v=8
functional_test.go:653: (dbg) Done: out/minikube-darwin-amd64 start -p functional-691000 --alsologtostderr -v=8: (42.229347204s)
functional_test.go:657: soft start took 42.22997753s for "functional-691000" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.23s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:675: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:690: (dbg) Run:  kubectl --context functional-691000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (10.02s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 cache add k8s.gcr.io/pause:3.1: (3.451736041s)
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 cache add k8s.gcr.io/pause:3.3: (3.403685115s)
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 cache add k8s.gcr.io/pause:latest
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 cache add k8s.gcr.io/pause:latest: (3.164255179s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.02s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (3.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1071: (dbg) Run:  docker build -t minikube-local-cache-test:functional-691000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1794889211/001
functional_test.go:1083: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 cache add minikube-local-cache-test:functional-691000
functional_test.go:1083: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 cache add minikube-local-cache-test:functional-691000: (2.735915947s)
functional_test.go:1088: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 cache delete minikube-local-cache-test:functional-691000
functional_test.go:1077: (dbg) Run:  docker rmi minikube-local-cache-test:functional-691000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.28s)
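For reference, a minimal standalone sketch of the add_local flow exercised above — not the harness code; it assumes `docker` and `minikube` are on PATH (rather than the out/ build used here), a Dockerfile in the working directory, and uses a hypothetical `:demo` tag:

// Build a throwaway local image, add it to minikube's cache, then clean up.
package main

import (
	"log"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"docker", "build", "-t", "minikube-local-cache-test:demo", "."},
		{"minikube", "-p", "functional-691000", "cache", "add", "minikube-local-cache-test:demo"},
		{"minikube", "-p", "functional-691000", "cache", "delete", "minikube-local-cache-test:demo"},
		{"docker", "rmi", "minikube-local-cache-test:demo"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v\n%s", s, err, out)
		}
	}
}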

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1096: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1104: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (3.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1141: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-691000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (392.115791ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1152: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 cache reload
functional_test.go:1152: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 cache reload: (2.53402999s)
functional_test.go:1157: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.76s)
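A minimal sketch of the cache-reload round trip shown above (illustrative only; assumes a `minikube` binary on PATH and the same docker-runtime profile):

// Remove an image inside the node, confirm it is gone, reload the cache, confirm it is back.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	p := "functional-691000"
	// Delete the image inside the node so the in-node copy is stale.
	_ = exec.Command("minikube", "-p", p, "ssh", "sudo docker rmi k8s.gcr.io/pause:latest").Run()
	// inspecti is expected to fail now (the exit status 1 seen in the log above).
	if err := exec.Command("minikube", "-p", p, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest").Run(); err != nil {
		fmt.Println("image missing as expected:", err)
	}
	// cache reload pushes the host-side cache back into the node.
	if err := exec.Command("minikube", "-p", p, "cache", "reload").Run(); err != nil {
		panic(err)
	}
	// After the reload the same inspecti should succeed.
	if err := exec.Command("minikube", "-p", p, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest").Run(); err != nil {
		panic(err)
	}
	fmt.Println("image restored from cache")
}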

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1166: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1166: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:710: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 kubectl -- --context functional-691000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.54s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.82s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: (dbg) Run:  out/kubectl --context functional-691000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.82s)

                                                
                                    
TestFunctional/serial/ExtraConfig (45.82s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:751: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-691000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0224 14:50:10.295299   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:50:10.303253   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:50:10.314075   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:50:10.335769   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:50:10.376923   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:50:10.459198   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:50:10.621382   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:50:10.943501   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:50:11.584000   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:50:12.864147   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:50:15.426272   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:50:20.546519   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 14:50:30.787372   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
functional_test.go:751: (dbg) Done: out/minikube-darwin-amd64 start -p functional-691000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.822613912s)
functional_test.go:755: restart took 45.82274676s for "functional-691000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.82s)
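A minimal sketch of restarting the existing profile with the same apiserver extra-config flag (illustrative; assumes `minikube` on PATH rather than the out/ build used by the harness):

// Restart a profile with an extra apiserver admission plugin and report how long it took.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("minikube", "start", "-p", "functional-691000",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	fmt.Printf("restart took %s\n", time.Since(start))
}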

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:804: (dbg) Run:  kubectl --context functional-691000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:819: etcd phase: Running
functional_test.go:829: etcd status: Ready
functional_test.go:819: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver status: Ready
functional_test.go:819: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager status: Ready
functional_test.go:819: kube-scheduler phase: Running
functional_test.go:829: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 logs
functional_test.go:1230: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 logs: (2.997418392s)
--- PASS: TestFunctional/serial/LogsCmd (3.00s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.23s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2516457472/001/logs.txt
functional_test.go:1244: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd2516457472/001/logs.txt: (3.230587726s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.23s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-691000 config get cpus: exit status 14 (48.740285ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 config set cpus 2
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 config get cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-691000 config get cpus: exit status 14 (68.799764ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
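The assertion behind the two non-zero exits above is that `config get` on an unset key returns exit code 14. A minimal sketch of the same round trip, assuming `minikube` on PATH:

// Unset/get/set/get/unset/get: only the gets on an unset key should report 14.
package main

import (
	"fmt"
	"os/exec"
)

func exitCode(args ...string) int {
	err := exec.Command("minikube", args...).Run()
	if ee, ok := err.(*exec.ExitError); ok {
		return ee.ExitCode()
	}
	if err != nil {
		panic(err)
	}
	return 0
}

func main() {
	p := "functional-691000"
	exitCode("-p", p, "config", "unset", "cpus")
	fmt.Println("get after unset:", exitCode("-p", p, "config", "get", "cpus")) // expect 14
	exitCode("-p", p, "config", "set", "cpus", "2")
	fmt.Println("get after set:  ", exitCode("-p", p, "config", "get", "cpus")) // expect 0
	exitCode("-p", p, "config", "unset", "cpus")
	fmt.Println("get after unset:", exitCode("-p", p, "config", "get", "cpus")) // expect 14
}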

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:899: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-691000 --alsologtostderr -v=1]
functional_test.go:904: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-691000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 29713: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.45s)

                                                
                                    
TestFunctional/parallel/DryRun (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-691000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:968: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-691000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (828.84925ms)

                                                
                                                
-- stdout --
	* [functional-691000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 14:51:58.656220   29633 out.go:296] Setting OutFile to fd 1 ...
	I0224 14:51:58.656410   29633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:51:58.656415   29633 out.go:309] Setting ErrFile to fd 2...
	I0224 14:51:58.656419   29633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:51:58.656524   29633 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 14:51:58.657819   29633 out.go:303] Setting JSON to false
	I0224 14:51:58.676180   29633 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6692,"bootTime":1677272426,"procs":385,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 14:51:58.676266   29633 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 14:51:58.698391   29633 out.go:177] * [functional-691000] minikube v1.29.0 on Darwin 13.2.1
	I0224 14:51:58.741208   29633 notify.go:220] Checking for updates...
	I0224 14:51:58.763090   29633 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 14:51:58.785021   29633 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 14:51:58.806176   29633 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 14:51:58.827131   29633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 14:51:58.869894   29633 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	I0224 14:51:58.911718   29633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 14:51:58.934125   29633 config.go:182] Loaded profile config "functional-691000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 14:51:58.934489   29633 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 14:51:59.004512   29633 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 14:51:59.004624   29633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 14:51:59.258238   29633 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-24 22:51:59.101513481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 14:51:59.309731   29633 out.go:177] * Using the docker driver based on existing profile
	I0224 14:51:59.330530   29633 start.go:296] selected driver: docker
	I0224 14:51:59.330552   29633 start.go:857] validating driver "docker" against &{Name:functional-691000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-691000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 14:51:59.330648   29633 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 14:51:59.354502   29633 out.go:177] 
	W0224 14:51:59.375629   29633 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0224 14:51:59.396651   29633 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:985: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-691000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.71s)
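The exit status 23 above is minikube's RSRC_INSUFFICIENT_REQ_MEMORY code: a 250MB request is rejected during the dry run before anything is created. A minimal sketch, assuming `minikube` on PATH:

// Ask for an impossibly small memory allocation in dry-run mode and inspect the exit code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-691000",
		"--dry-run", "--memory", "250MB", "--driver=docker")
	out, err := cmd.CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok {
		// 23 corresponds to the RSRC_INSUFFICIENT_REQ_MEMORY failure seen in the log above.
		fmt.Printf("dry run rejected (exit %d):\n%s", ee.ExitCode(), out)
		return
	}
	fmt.Printf("unexpected success:\n%s", out)
}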

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1014: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-691000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1014: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-691000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (879.165326ms)

                                                
                                                
-- stdout --
	* [functional-691000] minikube v1.29.0 sur Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 14:51:59.011633   29640 out.go:296] Setting OutFile to fd 1 ...
	I0224 14:51:59.011997   29640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:51:59.012006   29640 out.go:309] Setting ErrFile to fd 2...
	I0224 14:51:59.012013   29640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 14:51:59.012280   29640 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 14:51:59.014520   29640 out.go:303] Setting JSON to false
	I0224 14:51:59.035918   29640 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6693,"bootTime":1677272426,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2.1","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0224 14:51:59.036022   29640 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0224 14:51:59.057910   29640 out.go:177] * [functional-691000] minikube v1.29.0 sur Darwin 13.2.1
	I0224 14:51:59.099517   29640 notify.go:220] Checking for updates...
	I0224 14:51:59.120478   29640 out.go:177]   - MINIKUBE_LOCATION=15909
	I0224 14:51:59.141535   29640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	I0224 14:51:59.162537   29640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0224 14:51:59.183637   29640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 14:51:59.225540   29640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	I0224 14:51:59.267476   29640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 14:51:59.310541   29640 config.go:182] Loaded profile config "functional-691000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 14:51:59.311199   29640 driver.go:365] Setting default libvirt URI to qemu:///system
	I0224 14:51:59.442600   29640 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0224 14:51:59.442734   29640 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 14:51:59.638258   29640 info.go:266] docker info: {ID:IP4W:MU7T:LXXP:A5FK:AZDS:VO26:5WXJ:AUD6:DYFY:LVPL:GBAL:LED3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-24 22:51:59.496179626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 14:51:59.698531   29640 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0224 14:51:59.719649   29640 start.go:296] selected driver: docker
	I0224 14:51:59.719675   29640 start.go:857] validating driver "docker" against &{Name:functional-691000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-691000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0224 14:51:59.719785   29640 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 14:51:59.762607   29640 out.go:177] 
	W0224 14:51:59.783720   29640 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0224 14:51:59.804673   29640 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.88s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:848: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 status
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:866: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.33s)
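A minimal sketch of consuming `status -o json` programmatically; the struct fields mirror the template keys used in the run above ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}), and single-node output is assumed to be a single JSON object:

// Decode `minikube status -o json` for one profile and print the component states.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-691000", "status", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var st status
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}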

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1658: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 addons list
functional_test.go:1670: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.25s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b3f44737-b2e9-4f53-aa1e-58e309bec845] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008820585s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-691000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-691000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-691000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-691000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d334ffa3-f317-4c21-ba02-c2f7b6dab192] Pending
helpers_test.go:344: "sp-pod" [d334ffa3-f317-4c21-ba02-c2f7b6dab192] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d334ffa3-f317-4c21-ba02-c2f7b6dab192] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.007735696s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-691000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-691000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-691000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [38f5ed99-185a-4617-9a8b-afa0b8803e7a] Pending
helpers_test.go:344: "sp-pod" [38f5ed99-185a-4617-9a8b-afa0b8803e7a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [38f5ed99-185a-4617-9a8b-afa0b8803e7a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.006609736s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-691000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.67s)
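A minimal sketch of the persistence check above: write a file through the first pod, recreate the pod, and confirm the file is still on the PVC-backed mount (illustrative; uses the same testdata manifests and assumes `kubectl` on PATH):

// Apply the PVC and pod, touch a file on the mount, recreate the pod, and list the mount again.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) string {
	out, err := exec.Command("kubectl", append([]string{"--context", "functional-691000"}, args...)...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	fmt.Println(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")) // expect "foo" to survive
}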

                                                
                                    
TestFunctional/parallel/SSHCmd (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1693: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "echo hello"
functional_test.go:1710: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh -n functional-691000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 cp functional-691000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd4205800404/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh -n functional-691000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.13s)

                                                
                                    
TestFunctional/parallel/MySQL (26.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1758: (dbg) Run:  kubectl --context functional-691000 replace --force -f testdata/mysql.yaml
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-lvqfx" [29666412-033a-4e44-b066-dd8870c5863a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-lvqfx" [29666412-033a-4e44-b066-dd8870c5863a] Running
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.013095476s
functional_test.go:1772: (dbg) Run:  kubectl --context functional-691000 exec mysql-888f84dd9-lvqfx -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-691000 exec mysql-888f84dd9-lvqfx -- mysql -ppassword -e "show databases;": exit status 1 (162.537642ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-691000 exec mysql-888f84dd9-lvqfx -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-691000 exec mysql-888f84dd9-lvqfx -- mysql -ppassword -e "show databases;": exit status 1 (173.182645ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-691000 exec mysql-888f84dd9-lvqfx -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-691000 exec mysql-888f84dd9-lvqfx -- mysql -ppassword -e "show databases;": exit status 1 (162.227745ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-691000 exec mysql-888f84dd9-lvqfx -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-691000 exec mysql-888f84dd9-lvqfx -- mysql -ppassword -e "show databases;": exit status 1 (220.424757ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-691000 exec mysql-888f84dd9-lvqfx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.57s)
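The repeated 1045/2002 errors above are expected: the pod reports Running before mysqld accepts connections, so the query is retried until it succeeds. A minimal sketch of that polling loop (the pod name below is a placeholder for the generated one):

// Retry `show databases;` against the mysql pod until the server is actually ready.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-888f84dd9-lvqfx" // placeholder; look it up with `kubectl get po -l app=mysql`
	for i := 0; i < 10; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-691000",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Println("mysqld not ready yet, retrying:", err)
		time.Sleep(5 * time.Second)
	}
	panic("mysql never became ready")
}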

                                                
                                    
TestFunctional/parallel/FileSync (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1894: Checking for existence of /etc/test/nested/copy/26871/hosts within VM
functional_test.go:1896: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "sudo cat /etc/test/nested/copy/26871/hosts"
functional_test.go:1901: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)
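What this verifies: a file placed under $MINIKUBE_HOME/files/<path> before start is synced to /<path> inside the node (the local sync path is logged in CopySyncFile above). A minimal sketch of the same in-node check, assuming `minikube` on PATH:

// Cat the synced file inside the node over minikube ssh.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-691000", "ssh",
		"sudo cat /etc/test/nested/copy/26871/hosts").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}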

                                                
                                    
TestFunctional/parallel/CertSync (2.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1937: Checking for existence of /etc/ssl/certs/26871.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "sudo cat /etc/ssl/certs/26871.pem"
functional_test.go:1937: Checking for existence of /usr/share/ca-certificates/26871.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "sudo cat /usr/share/ca-certificates/26871.pem"
E0224 14:50:51.268170   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
functional_test.go:1937: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/268712.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "sudo cat /etc/ssl/certs/268712.pem"
functional_test.go:1964: Checking for existence of /usr/share/ca-certificates/268712.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "sudo cat /usr/share/ca-certificates/268712.pem"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.71s)
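A minimal sketch of the cert-sync check above: the same test certificate is expected at the literal .pem path, under /usr/share/ca-certificates, and under its OpenSSL hash name (paths copied from the run; assumes `minikube` on PATH):

// Probe each expected certificate location inside the node and report presence.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/26871.pem",
		"/usr/share/ca-certificates/26871.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		err := exec.Command("minikube", "-p", "functional-691000", "ssh", "sudo cat "+p).Run()
		fmt.Printf("%-40s present=%v\n", p, err == nil)
	}
}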

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-691000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1992: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "sudo systemctl is-active crio"
functional_test.go:1992: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-691000 ssh "sudo systemctl is-active crio": exit status 1 (624.221326ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
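With the docker runtime active, `systemctl is-active crio` inside the node prints "inactive" and exits non-zero, which the wrapping `minikube ssh` surfaces as its own non-zero exit — that is what the run above asserts. A minimal sketch, assuming `minikube` on PATH:

// Check that the crio unit is inactive when docker is the selected runtime.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-691000", "ssh",
		"sudo systemctl is-active crio").CombinedOutput()
	fmt.Printf("output: %serror: %v\n", out, err) // expect "inactive" plus a non-zero exit
}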

                                                
                                    
TestFunctional/parallel/License (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2253: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.92s)

                                                
                                    
TestFunctional/parallel/Version/short (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2221: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

                                                
                                    
TestFunctional/parallel/Version/components (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2235: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 version -o=json --components
functional_test.go:2235: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 version -o=json --components: (1.020225777s)
--- PASS: TestFunctional/parallel/Version/components (1.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image ls --format short
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-691000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-691000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-691000
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image ls --format table
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-691000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-691000 | a0ccb7e2260b6 | 1.24MB |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-691000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 3f8a00f137a0d | 142MB  |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/minikube-local-cache-test | functional-691000 | fc8a41ee93b8c | 30B    |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
|---------------------------------------------|-------------------|---------------|--------|
2023/02/24 14:52:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image ls --format json
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-691000 image ls --format json:
[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-691000"],"size":"32900000"},{"id":"fc8a41ee93b8c36cf7b30a996c185e69e9dc494d498ac9c7e568351d3a6291a6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-691000"],"size":"30"},{"id"
:"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":[
"docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"a0ccb7e2260b65a0091cabdf2661
dc017614d8bca852317b267dc5450803d9cd","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-691000"],"size":"1240000"},{"id":"3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image ls --format yaml
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-691000 image ls --format yaml:
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: fc8a41ee93b8c36cf7b30a996c185e69e9dc494d498ac9c7e568351d3a6291a6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-691000
size: "30"
- id: 3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-691000
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh pgrep buildkitd
functional_test.go:305: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-691000 ssh pgrep buildkitd: exit status 1 (421.542451ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image build -t localhost/my-image:functional-691000 testdata/build
functional_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 image build -t localhost/my-image:functional-691000 testdata/build: (4.349896869s)
functional_test.go:317: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-691000 image build -t localhost/my-image:functional-691000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in ba38654f23d7
Removing intermediate container ba38654f23d7
---> b4bdb278ac42
Step 3/3 : ADD content.txt /
---> a0ccb7e2260b
Successfully built a0ccb7e2260b
Successfully tagged localhost/my-image:functional-691000
functional_test.go:320: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-691000 image build -t localhost/my-image:functional-691000 testdata/build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.15s)

TestFunctional/parallel/ImageCommands/Setup (3.35s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:339: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.28420597s)
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-691000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.35s)

TestFunctional/parallel/DockerEnv/bash (2.02s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:493: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-691000 docker-env) && out/minikube-darwin-amd64 status -p functional-691000"
functional_test.go:493: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-691000 docker-env) && out/minikube-darwin-amd64 status -p functional-691000": (1.355287885s)
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-691000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.02s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.39s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.34s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.34s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image load --daemon gcr.io/google-containers/addon-resizer:functional-691000
functional_test.go:352: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 image load --daemon gcr.io/google-containers/addon-resizer:functional-691000: (3.501984396s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.81s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image load --daemon gcr.io/google-containers/addon-resizer:functional-691000
functional_test.go:362: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 image load --daemon gcr.io/google-containers/addon-resizer:functional-691000: (2.409965929s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.88s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:232: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.281363997s)
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-691000
functional_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image load --daemon gcr.io/google-containers/addon-resizer:functional-691000
functional_test.go:242: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 image load --daemon gcr.io/google-containers/addon-resizer:functional-691000: (4.25082424s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.02s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image save gcr.io/google-containers/addon-resizer:functional-691000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:377: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 image save gcr.io/google-containers/addon-resizer:functional-691000 /Users/jenkins/workspace/addon-resizer-save.tar: (1.907782823s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.91s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image rm gcr.io/google-containers/addon-resizer:functional-691000
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:406: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.458665261s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.78s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-691000
functional_test.go:421: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 image save --daemon gcr.io/google-containers/addon-resizer:functional-691000
functional_test.go:421: (dbg) Done: out/minikube-darwin-amd64 -p functional-691000 image save --daemon gcr.io/google-containers/addon-resizer:functional-691000: (2.713083265s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-691000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.84s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-691000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-691000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9fc03afe-5f53-4b87-bf9b-ccd28018a690] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9fc03afe-5f53-4b87-bf9b-ccd28018a690] Running
E0224 14:51:32.229508   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.007516715s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.14s)

TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/ServiceJSONOutput
functional_test.go:1547: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 service list -o json
functional_test.go:1552: Took "617.333583ms" to run "out/minikube-darwin-amd64 -p functional-691000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.62s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-691000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-691000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 29290: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1267: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1272: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.53s)

TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1307: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1312: Took "421.355039ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1321: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1326: Took "68.302848ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1358: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1363: Took "418.084637ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1371: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1376: Took "69.883079ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/MountCmd/any-port (9.5s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-691000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port811135767/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1677279106799775000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port811135767/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1677279106799775000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port811135767/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1677279106799775000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port811135767/001/test-1677279106799775000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-691000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (399.233338ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 24 22:51 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 24 22:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 24 22:51 test-1677279106799775000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh cat /mount-9p/test-1677279106799775000
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-691000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cea6bce7-ab81-4fa2-ab3e-3f37e8ff8419] Pending
helpers_test.go:344: "busybox-mount" [cea6bce7-ab81-4fa2-ab3e-3f37e8ff8419] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cea6bce7-ab81-4fa2-ab3e-3f37e8ff8419] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cea6bce7-ab81-4fa2-ab3e-3f37e8ff8419] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.008424431s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-691000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-691000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port811135767/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.50s)

TestFunctional/parallel/MountCmd/specific-port (2.31s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-691000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port927130949/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-691000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (404.986343ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-691000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port927130949/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-691000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-691000 ssh "sudo umount -f /mount-9p": exit status 1 (376.785556ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-691000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-691000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port927130949/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.31s)

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-691000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-691000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-691000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (2.19s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-329000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-329000: (2.187474118s)
--- PASS: TestImageBuild/serial/NormalBuild (2.19s)

TestImageBuild/serial/BuildWithBuildArg (0.96s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-329000
E0224 14:52:54.151190   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.96s)

TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-329000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-329000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

TestJSONOutput/start/Command (44.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-343000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0224 15:00:54.066869   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 15:01:21.876555   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-343000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (44.95921165s)
--- PASS: TestJSONOutput/start/Command (44.96s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-343000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-343000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-343000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-343000 --output=json --user=testUser: (10.871887956s)
--- PASS: TestJSONOutput/stop/Command (10.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.74s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-849000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-849000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (349.782207ms)

-- stdout --
	{"specversion":"1.0","id":"18e9b543-527e-43ef-b3f0-df8606cbc03b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-849000] minikube v1.29.0 on Darwin 13.2.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"401befe4-a75a-4745-bc11-d5219ac0bedc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"b4c8857c-9e53-4556-9d29-ddd80fefb6d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig"}}
	{"specversion":"1.0","id":"01e419ed-2871-47c3-a94b-569422cced3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"2ce304bc-5da9-451c-b935-82d95d7b7ce0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9db0a788-62dc-481f-850f-28651d7fbbe1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube"}}
	{"specversion":"1.0","id":"a906400a-d6a2-40f9-a5d4-7b143fb75a83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f0fa9ae9-85fe-4e2e-92f2-9f822febdd4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-849000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-849000
--- PASS: TestErrorJSONOutput (0.74s)

TestKicCustomNetwork/create_custom_network (30.89s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-617000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-617000 --network=: (28.241816635s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-617000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-617000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-617000: (2.589126626s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.89s)

TestKicCustomNetwork/use_default_bridge_network (31.01s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-212000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-212000 --network=bridge: (28.71882206s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-212000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-212000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-212000: (2.238176798s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.01s)

TestKicExistingNetwork (31.43s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-182000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-182000 --network=existing-network: (28.58728186s)
helpers_test.go:175: Cleaning up "existing-network-182000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-182000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-182000: (2.476405025s)
--- PASS: TestKicExistingNetwork (31.43s)

TestKicCustomSubnet (30.83s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-743000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-743000 --subnet=192.168.60.0/24: (28.144276011s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-743000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-743000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-743000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-743000: (2.624737226s)
--- PASS: TestKicCustomSubnet (30.83s)
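The --format string used above, {{(index .IPAM.Config 0).Subnet}}, is an ordinary Go text/template expression that docker evaluates against the network's inspect data. A standalone sketch, using a minimal stand-in struct rather than Docker's real API types, shows how that expression resolves:

package main

import (
	"os"
	"text/template"
)

// network is a minimal stand-in for the slice of IPAM config entries that
// "docker network inspect" exposes; it is not the full Docker API type.
type network struct {
	IPAM struct {
		Config []struct{ Subnet string }
	}
}

func main() {
	var n network
	n.IPAM.Config = []struct{ Subnet string }{{Subnet: "192.168.60.0/24"}}

	// The same template expression the test passes to docker network inspect.
	tmpl := template.Must(template.New("subnet").Parse("{{(index .IPAM.Config 0).Subnet}}\n"))
	_ = tmpl.Execute(os.Stdout, n) // prints 192.168.60.0/24, matching the --subnet flag above
}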

TestKicStaticIP (31.3s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-232000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-232000 --static-ip=192.168.200.200: (28.459655025s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-232000 ip
helpers_test.go:175: Cleaning up "static-ip-232000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-232000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-232000: (2.606564867s)
--- PASS: TestKicStaticIP (31.30s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (63.65s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-289000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-289000 --driver=docker : (28.49614s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-290000 --driver=docker 
E0224 15:05:10.430889   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-290000 --driver=docker : (28.149782611s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-289000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-290000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-290000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-290000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-290000: (2.623468832s)
helpers_test.go:175: Cleaning up "first-289000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-289000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-289000: (2.563216913s)
--- PASS: TestMinikubeProfile (63.65s)

TestMountStart/serial/StartWithMountFirst (8.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-857000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-857000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.157632083s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.16s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-857000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (8.4s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-870000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-870000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.40364371s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.40s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-870000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (2.14s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-857000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-857000 --alsologtostderr -v=5: (2.137390696s)
--- PASS: TestMountStart/serial/DeleteFirst (2.14s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-870000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.57s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-870000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-870000: (1.566980203s)
--- PASS: TestMountStart/serial/Stop (1.57s)

TestMountStart/serial/RestartStopped (6.34s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-870000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-870000: (5.34017518s)
--- PASS: TestMountStart/serial/RestartStopped (6.34s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-870000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (78.64s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-358000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0224 15:05:54.191839   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 15:06:33.488121   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-358000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m17.942213977s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.64s)

TestMultiNode/serial/AddNode (22.82s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-358000 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-358000 -v 3 --alsologtostderr: (21.637310193s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-358000 status --alsologtostderr: (1.18455761s)
--- PASS: TestMultiNode/serial/AddNode (22.82s)

TestMultiNode/serial/ProfileList (0.51s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.51s)

TestMultiNode/serial/CopyFile (14.6s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 cp testdata/cp-test.txt multinode-358000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 cp multinode-358000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile47587914/001/cp-test_multinode-358000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 cp multinode-358000:/home/docker/cp-test.txt multinode-358000-m02:/home/docker/cp-test_multinode-358000_multinode-358000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000-m02 "sudo cat /home/docker/cp-test_multinode-358000_multinode-358000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 cp multinode-358000:/home/docker/cp-test.txt multinode-358000-m03:/home/docker/cp-test_multinode-358000_multinode-358000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000-m03 "sudo cat /home/docker/cp-test_multinode-358000_multinode-358000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 cp testdata/cp-test.txt multinode-358000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 cp multinode-358000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile47587914/001/cp-test_multinode-358000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 cp multinode-358000-m02:/home/docker/cp-test.txt multinode-358000:/home/docker/cp-test_multinode-358000-m02_multinode-358000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000 "sudo cat /home/docker/cp-test_multinode-358000-m02_multinode-358000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 cp multinode-358000-m02:/home/docker/cp-test.txt multinode-358000-m03:/home/docker/cp-test_multinode-358000-m02_multinode-358000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000-m03 "sudo cat /home/docker/cp-test_multinode-358000-m02_multinode-358000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 cp testdata/cp-test.txt multinode-358000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 cp multinode-358000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile47587914/001/cp-test_multinode-358000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 cp multinode-358000-m03:/home/docker/cp-test.txt multinode-358000:/home/docker/cp-test_multinode-358000-m03_multinode-358000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000 "sudo cat /home/docker/cp-test_multinode-358000-m03_multinode-358000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 cp multinode-358000-m03:/home/docker/cp-test.txt multinode-358000-m02:/home/docker/cp-test_multinode-358000-m03_multinode-358000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 ssh -n multinode-358000-m02 "sudo cat /home/docker/cp-test_multinode-358000-m03_multinode-358000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.60s)

TestMultiNode/serial/StopNode (3.11s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-358000 node stop m03: (1.585485893s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-358000 status: exit status 7 (772.445645ms)

-- stdout --
	multinode-358000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-358000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-358000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-358000 status --alsologtostderr: exit status 7 (755.744209ms)

-- stdout --
	multinode-358000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-358000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-358000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0224 15:08:07.387520   33644 out.go:296] Setting OutFile to fd 1 ...
	I0224 15:08:07.387688   33644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:08:07.387693   33644 out.go:309] Setting ErrFile to fd 2...
	I0224 15:08:07.387697   33644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:08:07.387815   33644 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 15:08:07.387997   33644 out.go:303] Setting JSON to false
	I0224 15:08:07.388022   33644 mustload.go:65] Loading cluster: multinode-358000
	I0224 15:08:07.388074   33644 notify.go:220] Checking for updates...
	I0224 15:08:07.388305   33644 config.go:182] Loaded profile config "multinode-358000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:08:07.388317   33644 status.go:255] checking status of multinode-358000 ...
	I0224 15:08:07.388712   33644 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:08:07.448383   33644 status.go:330] multinode-358000 host status = "Running" (err=<nil>)
	I0224 15:08:07.448408   33644 host.go:66] Checking if "multinode-358000" exists ...
	I0224 15:08:07.448654   33644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000
	I0224 15:08:07.507333   33644 host.go:66] Checking if "multinode-358000" exists ...
	I0224 15:08:07.507580   33644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 15:08:07.507641   33644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:08:07.565599   33644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58094 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000/id_rsa Username:docker}
	I0224 15:08:07.659406   33644 ssh_runner.go:195] Run: systemctl --version
	I0224 15:08:07.663947   33644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:08:07.673644   33644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-358000
	I0224 15:08:07.731915   33644 kubeconfig.go:92] found "multinode-358000" server: "https://127.0.0.1:58093"
	I0224 15:08:07.731940   33644 api_server.go:165] Checking apiserver status ...
	I0224 15:08:07.731984   33644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 15:08:07.742165   33644 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1929/cgroup
	W0224 15:08:07.750640   33644 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1929/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0224 15:08:07.750699   33644 ssh_runner.go:195] Run: ls
	I0224 15:08:07.754913   33644 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:58093/healthz ...
	I0224 15:08:07.759892   33644 api_server.go:278] https://127.0.0.1:58093/healthz returned 200:
	ok
	I0224 15:08:07.759904   33644 status.go:421] multinode-358000 apiserver status = Running (err=<nil>)
	I0224 15:08:07.759917   33644 status.go:257] multinode-358000 status: &{Name:multinode-358000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 15:08:07.759932   33644 status.go:255] checking status of multinode-358000-m02 ...
	I0224 15:08:07.760179   33644 cli_runner.go:164] Run: docker container inspect multinode-358000-m02 --format={{.State.Status}}
	I0224 15:08:07.817578   33644 status.go:330] multinode-358000-m02 host status = "Running" (err=<nil>)
	I0224 15:08:07.817608   33644 host.go:66] Checking if "multinode-358000-m02" exists ...
	I0224 15:08:07.817902   33644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-358000-m02
	I0224 15:08:07.876497   33644 host.go:66] Checking if "multinode-358000-m02" exists ...
	I0224 15:08:07.876763   33644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 15:08:07.876815   33644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-358000-m02
	I0224 15:08:07.934024   33644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58163 SSHKeyPath:/Users/jenkins/minikube-integration/15909-26406/.minikube/machines/multinode-358000-m02/id_rsa Username:docker}
	I0224 15:08:08.028488   33644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 15:08:08.037964   33644 status.go:257] multinode-358000-m02 status: &{Name:multinode-358000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0224 15:08:08.037986   33644 status.go:255] checking status of multinode-358000-m03 ...
	I0224 15:08:08.038248   33644 cli_runner.go:164] Run: docker container inspect multinode-358000-m03 --format={{.State.Status}}
	I0224 15:08:08.096245   33644 status.go:330] multinode-358000-m03 host status = "Stopped" (err=<nil>)
	I0224 15:08:08.096264   33644 status.go:343] host is not running, skipping remaining checks
	I0224 15:08:08.096273   33644 status.go:257] multinode-358000-m03 status: &{Name:multinode-358000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.11s)

TestMultiNode/serial/StartAfterStop (10.34s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-358000 node start m03 --alsologtostderr: (9.252499006s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.34s)

TestMultiNode/serial/RestartKeepsNodes (89.51s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-358000
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-358000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-358000: (23.155154457s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-358000 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-358000 --wait=true -v=8 --alsologtostderr: (1m6.257354113s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-358000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.51s)

TestMultiNode/serial/DeleteNode (6.18s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-358000 node delete m03: (5.290213121s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.18s)

TestMultiNode/serial/StopMultiNode (21.87s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 stop
E0224 15:10:10.441574   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-358000 stop: (21.547696763s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-358000 status: exit status 7 (159.766754ms)

-- stdout --
	multinode-358000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-358000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-358000 status --alsologtostderr: exit status 7 (159.836305ms)

-- stdout --
	multinode-358000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-358000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0224 15:10:15.892350   34193 out.go:296] Setting OutFile to fd 1 ...
	I0224 15:10:15.892520   34193 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:10:15.892524   34193 out.go:309] Setting ErrFile to fd 2...
	I0224 15:10:15.892529   34193 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0224 15:10:15.892640   34193 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-26406/.minikube/bin
	I0224 15:10:15.892840   34193 out.go:303] Setting JSON to false
	I0224 15:10:15.892864   34193 mustload.go:65] Loading cluster: multinode-358000
	I0224 15:10:15.892899   34193 notify.go:220] Checking for updates...
	I0224 15:10:15.893151   34193 config.go:182] Loaded profile config "multinode-358000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0224 15:10:15.893165   34193 status.go:255] checking status of multinode-358000 ...
	I0224 15:10:15.893567   34193 cli_runner.go:164] Run: docker container inspect multinode-358000 --format={{.State.Status}}
	I0224 15:10:15.950017   34193 status.go:330] multinode-358000 host status = "Stopped" (err=<nil>)
	I0224 15:10:15.950033   34193 status.go:343] host is not running, skipping remaining checks
	I0224 15:10:15.950039   34193 status.go:257] multinode-358000 status: &{Name:multinode-358000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 15:10:15.950067   34193 status.go:255] checking status of multinode-358000-m02 ...
	I0224 15:10:15.950339   34193 cli_runner.go:164] Run: docker container inspect multinode-358000-m02 --format={{.State.Status}}
	I0224 15:10:16.006113   34193 status.go:330] multinode-358000-m02 host status = "Stopped" (err=<nil>)
	I0224 15:10:16.006144   34193 status.go:343] host is not running, skipping remaining checks
	I0224 15:10:16.006156   34193 status.go:257] multinode-358000-m02 status: &{Name:multinode-358000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.87s)

TestMultiNode/serial/RestartMultiNode (76.54s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-358000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0224 15:10:54.201093   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-358000 --wait=true -v=8 --alsologtostderr --driver=docker : (1m15.652508073s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-358000 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (76.54s)

TestMultiNode/serial/ValidateNameConflict (33.64s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-358000
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-358000-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-358000-m02 --driver=docker : exit status 14 (410.834386ms)

-- stdout --
	* [multinode-358000-m02] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-358000-m02' is duplicated with machine name 'multinode-358000-m02' in profile 'multinode-358000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-358000-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-358000-m03 --driver=docker : (30.085133634s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-358000
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-358000: exit status 80 (506.05086ms)

-- stdout --
	* Adding node m03 to cluster multinode-358000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-358000-m03 already exists in multinode-358000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-358000-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-358000-m03: (2.594096208s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.64s)

TestPreload (194.79s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-526000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0224 15:12:17.256524   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-526000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m26.732475456s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-526000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-526000 -- docker pull gcr.io/k8s-minikube/busybox: (13.136120429s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-526000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-526000: (10.84666104s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-526000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
E0224 15:15:10.448888   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-526000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (1m20.933347711s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-526000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-526000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-526000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-526000: (2.722688889s)
--- PASS: TestPreload (194.79s)

TestScheduledStopUnix (103.59s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-904000 --memory=2048 --driver=docker 
E0224 15:15:54.209875   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-904000 --memory=2048 --driver=docker : (29.429147851s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-904000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-904000 -n scheduled-stop-904000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-904000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-904000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-904000 -n scheduled-stop-904000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-904000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-904000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-904000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-904000: exit status 7 (107.596068ms)

-- stdout --
	scheduled-stop-904000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-904000 -n scheduled-stop-904000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-904000 -n scheduled-stop-904000: exit status 7 (102.575161ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-904000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-904000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-904000: (2.310191477s)
--- PASS: TestScheduledStopUnix (103.59s)

TestSkaffold (78.84s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3584724031 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-575000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-575000 --memory=2600 --driver=docker : (33.181784141s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3584724031 run --minikube-profile skaffold-575000 --kube-context skaffold-575000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3584724031 run --minikube-profile skaffold-575000 --kube-context skaffold-575000 --status-check=true --port-forward=false --interactive=false: (19.327030743s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7dcd7b6d9c-bmqnj" [c9fa8f57-2ca8-43b8-b9ee-58b6061873cc] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.014627349s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7b9df777b7-74gzn" [5e17a45d-7935-4e53-aed0-56fa83e1058c] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.007114136s
helpers_test.go:175: Cleaning up "skaffold-575000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-575000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-575000: (2.867868537s)
--- PASS: TestSkaffold (78.84s)

TestInsufficientStorage (14.74s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-607000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-607000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.588111175s)

-- stdout --
	{"specversion":"1.0","id":"db8c3950-8cac-4190-ab93-71f9ce16afa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-607000] minikube v1.29.0 on Darwin 13.2.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a5a6174-2048-45a8-af0c-cee8023f148f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"701952ba-df63-43ba-9eae-46564f6b11da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig"}}
	{"specversion":"1.0","id":"40f6224d-dbcd-4e28-a598-fbeef84b1fa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"774e7df7-2603-4e16-a714-4454ce98caae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d6366e6-5b05-4fad-aa0a-c1a09ade60c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube"}}
	{"specversion":"1.0","id":"7ae901b6-be9a-4354-aaba-1149c7fc6270","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"02a0614d-5005-481e-831a-98231a940eb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b3c4110f-f4ce-412e-bada-6d6c5852f432","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2fe9a301-c40f-4205-9309-5047397be763","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f181cb49-3a54-449b-a9b4-72984b4cff33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"2fceb18b-ab22-43bf-948a-b3c41c2d65ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-607000 in cluster insufficient-storage-607000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c4480f0-9d57-4049-85a7-2e3cc58ff112","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a04548d2-a8ac-416a-836b-67e680569c66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a267cf8-8af7-4d1d-9334-594257c30ca4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-607000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-607000 --output=json --layout=cluster: exit status 7 (388.351035ms)

-- stdout --
	{"Name":"insufficient-storage-607000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-607000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0224 15:18:44.184918   36149 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-607000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-607000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-607000 --output=json --layout=cluster: exit status 7 (389.474727ms)

-- stdout --
	{"Name":"insufficient-storage-607000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-607000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0224 15:18:44.575153   36159 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-607000" does not appear in /Users/jenkins/minikube-integration/15909-26406/kubeconfig
	E0224 15:18:44.584087   36159 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/insufficient-storage-607000/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-607000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-607000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-607000: (2.372951852s)
--- PASS: TestInsufficientStorage (14.74s)
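The --output=json --layout=cluster documents above are single JSON objects. A rough Go sketch that unmarshals just the fields visible in this log (these are illustrative stand-ins, not minikube's own status types) could look like:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// component and clusterStatus mirror only the fields visible in the status
// documents above (Name, StatusCode, StatusName, StatusDetail, BinaryVersion,
// Components, Nodes).
type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type clusterStatus struct {
	Name          string
	StatusCode    int
	StatusName    string
	StatusDetail  string
	BinaryVersion string
	Components    map[string]component
	Nodes         []struct {
		Name       string
		StatusCode int
		StatusName string
		Components map[string]component
	}
}

func main() {
	var st clusterStatus
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s: %s (%d) %s\n", st.Name, st.StatusName, st.StatusCode, st.StatusDetail)
	for _, node := range st.Nodes {
		fmt.Printf("  node %s: %s\n", node.Name, node.StatusName)
	}
}

Fed the first status document above, a reader like this would report InsufficientStorage (507) for the cluster and Stopped apiserver/kubelet components on the node.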

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (29.4s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1587955890/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1587955890/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1587955890/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1587955890/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (29.40s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (68.96s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1548771716/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (68.96s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (4.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.82s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-426000
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-426000: (3.530741181s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.53s)

                                                
                                    
TestPause/serial/Start (53.08s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-829000 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-829000 --memory=2048 --install-addons=false --wait=all --driver=docker : (53.083618237s)
--- PASS: TestPause/serial/Start (53.08s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (44.41s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-829000 --alsologtostderr -v=1 --driver=docker 
E0224 15:28:19.343016   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-829000 --alsologtostderr -v=1 --driver=docker : (44.394362402s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.41s)

                                                
                                    
TestPause/serial/Pause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-829000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

                                                
                                    
TestPause/serial/VerifyStatus (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-829000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-829000 --output=json --layout=cluster: exit status 2 (407.430939ms)

                                                
                                                
-- stdout --
	{"Name":"pause-829000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-829000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
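In the layout JSON above, StatusCode 418 maps to Paused and 405 to Stopped, which is why the status command exits non-zero even though the test passes. A small sketch for pulling the per-component state out of that payload (the jq expression is illustrative, not part of the test):

$ out/minikube-darwin-amd64 status -p pause-829000 --output=json --layout=cluster | jq -r '.Nodes[0].Components | to_entries[] | "\(.key): \(.value.StatusName)"'
# apiserver: Paused
# kubelet: Stopped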

                                                
                                    
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-829000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
TestPause/serial/PauseAgain (0.75s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-829000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

                                                
                                    
TestPause/serial/DeletePaused (2.61s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-829000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-829000 --alsologtostderr -v=5: (2.609286029s)
--- PASS: TestPause/serial/DeletePaused (2.61s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.56s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-829000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-829000: exit status 1 (54.249072ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-829000

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.56s)
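The cleanup check above is just a handful of Docker queries, so the same verification can be run manually after deleting a profile. A sketch using the profile from this run (the --filter flag is an addition for convenience, not used by the test):

$ docker ps -a --filter name=pause-829000
$ docker volume inspect pause-829000   # "No such volume" once the profile is gone
$ docker network ls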

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-447000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-447000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (392.738749ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-447000] minikube v1.29.0 on Darwin 13.2.1
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-26406/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-26406/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)
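Exit status 14 here is the usage error quoted in stderr: --no-kubernetes and --kubernetes-version are mutually exclusive. If the version was set as a global config value rather than a flag, the fix suggested by the message applies; a sketch:

$ out/minikube-darwin-amd64 config unset kubernetes-version
$ out/minikube-darwin-amd64 start -p NoKubernetes-447000 --no-kubernetes --driver=docker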

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (30.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-447000 --driver=docker 
E0224 15:28:47.032879   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
E0224 15:28:57.274330   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-447000 --driver=docker : (30.053872079s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-447000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.57s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-447000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-447000 --no-kubernetes --driver=docker : (15.90222807s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-447000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-447000 status -o json: exit status 2 (403.337876ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-447000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-447000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-447000: (2.512373655s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.82s)

                                                
                                    
TestNoKubernetes/serial/Start (7.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-447000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-447000 --no-kubernetes --driver=docker : (7.313170431s)
--- PASS: TestNoKubernetes/serial/Start (7.31s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-447000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-447000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (376.50924ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
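The exit status 1 above is the expected outcome: systemctl is-active exits non-zero when the unit is not active (status 3 in the run above), and the ssh wrapper passes that through. The same probe by hand:

$ out/minikube-darwin-amd64 ssh -p NoKubernetes-447000 "sudo systemctl is-active --quiet service kubelet"
$ echo $?   # non-zero while Kubernetes is disabled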

                                                
                                    
TestNoKubernetes/serial/ProfileList (34.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (18.915197723s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (15.463851375s)
--- PASS: TestNoKubernetes/serial/ProfileList (34.38s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-447000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-447000: (1.603081724s)
--- PASS: TestNoKubernetes/serial/Stop (1.60s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-447000 --driver=docker 
E0224 15:30:10.463167   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-447000 --driver=docker : (5.134460614s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.13s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-447000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-447000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (377.636669ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (45.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
E0224 15:30:54.225054   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (45.078460966s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.08s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-416000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (18.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-416000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-mxbsc" [b978556b-512b-4e65-8287-023f80cf2ba9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-mxbsc" [b978556b-512b-4e65-8287-023f80cf2ba9] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 18.009331174s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (18.20s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-416000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
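The DNS, Localhost, and HairPin sub-tests above are simple probes run inside the netcat deployment: cluster DNS resolution, a connection to localhost, and a connection back to the pod's own service name. They can be replayed directly with kubectl against the same context:

$ kubectl --context auto-416000 exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context auto-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
$ kubectl --context auto-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"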

                                                
                                    
TestNetworkPlugins/group/flannel/Start (58.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (58.828417087s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.83s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nbjvs" [200fc526-cdd5-4a90-83f1-86f7d3c3a284] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.012797188s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)
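For the CNI-specific groups, the start flag selects the plugin and the controller check then waits for the plugin's daemon pod by label. A sketch reproducing the flannel variant by hand (label and namespace are the ones used above; the start flags are a subset of the test command):

$ out/minikube-darwin-amd64 start -p flannel-416000 --memory=3072 --cni=flannel --driver=docker
$ kubectl --context flannel-416000 get pods -n kube-flannel -l app=flannel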

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-416000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-416000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-59djw" [d3b2dbdd-9ac7-4c1f-8fd1-b6cda445f3df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-59djw" [d3b2dbdd-9ac7-4c1f-8fd1-b6cda445f3df] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.008339355s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-416000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (59.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (59.967487971s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.97s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (47.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (47.953301339s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (47.95s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-f76p7" [848bd050-9254-4ff5-9d05-63480045130e] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.019795323s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-416000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-416000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-6q5l5" [49bbd6c9-0e82-4fb4-807b-7efa05a5511c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-6q5l5" [49bbd6c9-0e82-4fb4-807b-7efa05a5511c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.01229407s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-416000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-416000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-416000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-qlplj" [0b6d0645-edac-4d3e-945b-8eaa910379c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-qlplj" [0b6d0645-edac-4d3e-945b-8eaa910379c2] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.009408264s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (58.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (58.720151047s)
--- PASS: TestNetworkPlugins/group/bridge/Start (58.72s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-416000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (61.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E0224 15:35:54.178865   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 15:36:01.136914   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:36:01.142834   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:36:01.153495   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:36:01.173641   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:36:01.213869   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:36:01.294208   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:36:01.455631   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:36:01.776167   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:36:02.417117   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:36:03.697695   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:36:06.258187   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (1m1.048638899s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (61.05s)
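Unlike the CNI variants, the kubenet group selects the plugin through --network-plugin rather than --cni, as in the command above; the equivalent manual invocation (flags are a subset of the test command):

$ out/minikube-darwin-amd64 start -p kubenet-416000 --memory=3072 --network-plugin=kubenet --driver=docker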

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-416000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (16.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-416000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-dlq2g" [d2d985fd-bf60-426f-82da-e4cf4b4c18c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0224 15:36:11.378351   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-dlq2g" [d2d985fd-bf60-426f-82da-e4cf4b4c18c7] Running
E0224 15:36:21.618975   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 16.009087234s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (16.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-416000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-416000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (16.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-416000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-wf526" [08e44b7f-0326-45de-8d66-81080913a792] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0224 15:36:42.099444   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-wf526" [08e44b7f-0326-45de-8d66-81080913a792] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 16.009490366s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (16.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (58.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (58.250692968s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.25s)
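This variant shows that --cni also accepts a path to a CNI manifest instead of a built-in plugin name; the bundled kube-flannel.yaml is applied during start. A sketch of the equivalent manual start (flags are a subset of the test command):

$ out/minikube-darwin-amd64 start -p custom-flannel-416000 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker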

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-416000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (74.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
E0224 15:37:23.060089   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:37:41.408395   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:37:41.413508   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:37:41.425245   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:37:41.446315   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:37:41.486482   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:37:41.566620   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:37:41.727606   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:37:42.047792   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:37:42.688873   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:37:43.969642   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m14.314348312s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-416000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-416000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-775n9" [86e3e04f-b8bc-4c77-a4dd-95159b200fd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0224 15:37:46.529785   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:37:51.649979   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-775n9" [86e3e04f-b8bc-4c77-a4dd-95159b200fd5] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.01451747s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-416000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/Start (54.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-416000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (54.855573886s)
--- PASS: TestNetworkPlugins/group/false/Start (54.86s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8cl6j" [a0d99930-06b3-4e8e-ad53-e76d324b75b7] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.029815647s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-416000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (18.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-416000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-s6kh4" [e196c187-0d46-4bbe-95e0-cbd0214eeb66] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0224 15:38:44.980982   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-s6kh4" [e196c187-0d46-4bbe-95e0-cbd0214eeb66] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 18.011655952s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (18.22s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-416000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-416000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-416000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-pdd8k" [0936fa44-7153-4c89-834c-002a43969c79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0224 15:39:24.734272   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:39:24.740413   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:39:24.750614   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:39:24.770729   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:39:24.810940   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:39:24.891121   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:39:25.051226   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:39:25.409320   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:39:26.049662   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:39:27.329846   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-pdd8k" [0936fa44-7153-4c89-834c-002a43969c79] Running
E0224 15:39:29.890246   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.008087626s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.21s)

TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-416000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-416000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (72.73s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-540000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0224 15:39:57.694019   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
E0224 15:40:00.254734   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
E0224 15:40:05.376003   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
E0224 15:40:05.755727   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:40:10.422450   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/addons-821000/client.crt: no such file or directory
E0224 15:40:15.618246   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
E0224 15:40:25.255009   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:40:36.098575   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
E0224 15:40:46.716791   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:40:54.181937   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
E0224 15:41:01.140330   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
E0224 15:41:07.513256   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:41:07.519085   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:41:07.529387   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:41:07.551510   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:41:07.592502   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:41:07.672643   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:41:07.832874   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:41:08.153136   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:41:08.795354   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:41:10.075540   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-540000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (1m12.731916441s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.73s)

TestStartStop/group/no-preload/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-540000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4e9dfe6c-7d49-4e73-aef1-dadc88952150] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0224 15:41:12.681904   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [4e9dfe6c-7d49-4e73-aef1-dadc88952150] Running
E0224 15:41:17.059552   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
E0224 15:41:17.803710   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.011852055s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-540000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-540000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-540000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/no-preload/serial/Stop (10.88s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-540000 --alsologtostderr -v=3
E0224 15:41:28.045237   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:41:28.822796   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-540000 --alsologtostderr -v=3: (10.882152273s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.88s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-540000 -n no-preload-540000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-540000 -n no-preload-540000: exit status 7 (103.710702ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-540000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/no-preload/serial/SecondStart (581.66s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-540000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0224 15:41:35.282131   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:41:35.288665   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:41:35.298929   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:41:35.319553   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:41:35.359655   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:41:35.441823   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:41:35.602624   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:41:35.923227   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:41:36.563539   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:41:37.844059   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:41:40.405654   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:41:45.525947   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:41:48.525663   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:41:55.766518   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:42:08.637900   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kindnet-416000/client.crt: no such file or directory
E0224 15:42:16.247057   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:42:29.487066   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/bridge-416000/client.crt: no such file or directory
E0224 15:42:38.980903   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/enable-default-cni-416000/client.crt: no such file or directory
E0224 15:42:41.410964   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:42:46.306217   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:42:46.311333   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:42:46.322370   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:42:46.342490   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:42:46.382955   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:42:46.464122   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:42:46.626337   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:42:46.946637   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:42:47.588907   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:42:48.870337   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:42:51.431113   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:42:56.551519   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:42:57.207601   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
E0224 15:43:06.792493   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:43:09.098715   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:43:19.305536   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
E0224 15:43:27.274692   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-540000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (9m41.243250719s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-540000 -n no-preload-540000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (581.66s)

TestStartStop/group/old-k8s-version/serial/Stop (1.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-583000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-583000 --alsologtostderr -v=3: (1.599156239s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.60s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-583000 -n old-k8s-version-583000: exit status 7 (105.025966ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-583000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-tl55w" [4dd5e025-829e-4078-b4af-61cff7dcbb76] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0137266s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-tl55w" [4dd5e025-829e-4078-b4af-61cff7dcbb76] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008559551s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-540000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-540000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/no-preload/serial/Pause (3.17s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-540000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-540000 -n no-preload-540000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-540000 -n no-preload-540000: exit status 2 (411.789718ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-540000 -n no-preload-540000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-540000 -n no-preload-540000: exit status 2 (412.191176ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-540000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-540000 -n no-preload-540000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-540000 -n no-preload-540000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.17s)

TestStartStop/group/embed-certs/serial/FirstStart (44.41s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-451000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0224 15:51:35.286730   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/kubenet-416000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-451000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (44.406961152s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.41s)

TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-451000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [17184c35-34b1-4b16-a037-b2b2c61d5292] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [17184c35-34b1-4b16-a037-b2b2c61d5292] Running
E0224 15:52:24.189090   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/auto-416000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.013272899s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-451000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-451000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-451000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/embed-certs/serial/Stop (10.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-451000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-451000 --alsologtostderr -v=3: (10.897228516s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.90s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-451000 -n embed-certs-451000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-451000 -n embed-certs-451000: exit status 7 (104.665532ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-451000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/embed-certs/serial/SecondStart (557.99s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-451000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0224 15:52:41.415791   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/flannel-416000/client.crt: no such file or directory
E0224 15:52:46.311613   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/custom-flannel-416000/client.crt: no such file or directory
E0224 15:53:19.312144   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-451000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (9m17.56480286s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-451000 -n embed-certs-451000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (557.99s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-xwclk" [2b4dd7c4-6168-43c2-ba68-4fc9cf1ca16c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013341414s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-xwclk" [2b4dd7c4-6168-43c2-ba68-4fc9cf1ca16c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007976982s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-451000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-451000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/embed-certs/serial/Pause (3.41s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-451000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-451000 -n embed-certs-451000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-451000 -n embed-certs-451000: exit status 2 (416.914897ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-451000 -n embed-certs-451000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-451000 -n embed-certs-451000: exit status 2 (458.872941ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-451000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-451000 -n embed-certs-451000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-451000 -n embed-certs-451000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.41s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-367000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
E0224 16:02:17.337727   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/functional-691000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-367000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (46.143399491s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.14s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-367000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [51a05fe9-9363-4de1-84d8-f2ea5461b878] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [51a05fe9-9363-4de1-84d8-f2ea5461b878] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 15.015239827s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-367000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-367000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-367000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-367000 --alsologtostderr -v=3
E0224 16:03:19.408830   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/skaffold-575000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-367000 --alsologtostderr -v=3: (10.994905926s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 7 (105.46515ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-367000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (308.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-367000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
E0224 16:03:30.453196   26871 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-26406/.minikube/profiles/calico-416000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-367000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (5m7.719112732s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (308.14s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-4zqx8" [29a4090e-02ba-4acc-bea1-4388b47d14e1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-4zqx8" [29a4090e-02ba-4acc-bea1-4388b47d14e1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.013082498s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-4zqx8" [29a4090e-02ba-4acc-bea1-4388b47d14e1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009673374s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-367000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-367000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.45s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-367000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 2 (476.020138ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000: exit status 2 (419.101048ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-367000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-367000 -n default-k8s-diff-port-367000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.31s)

TestStartStop/group/newest-cni/serial/FirstStart (42.52s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-192000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-192000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (42.523694259s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.52s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-192000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/newest-cni/serial/Stop (6.05s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-192000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-192000 --alsologtostderr -v=3: (6.053371745s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (6.05s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-192000 -n newest-cni-192000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-192000 -n newest-cni-192000: exit status 7 (105.025631ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-192000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/newest-cni/serial/SecondStart (24.68s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-192000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-192000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (24.193492234s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-192000 -n newest-cni-192000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.68s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-192000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

TestStartStop/group/newest-cni/serial/Pause (3.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-192000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-192000 -n newest-cni-192000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-192000 -n newest-cni-192000: exit status 2 (415.822385ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-192000 -n newest-cni-192000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-192000 -n newest-cni-192000: exit status 2 (421.115526ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-192000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-192000 -n newest-cni-192000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-192000 -n newest-cni-192000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.22s)
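The pause verification above can be replayed by hand; a minimal sketch assuming the newest-cni-192000 profile still exists (exit status 2 from status while components are paused is expected, as the test notes):

    out/minikube-darwin-amd64 pause -p newest-cni-192000 --alsologtostderr -v=1
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-192000   # prints "Paused", exit status 2
    out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-192000     # prints "Stopped", exit status 2
    out/minikube-darwin-amd64 unpause -p newest-cni-192000 --alsologtostderr -v=1
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-192000   # exit status 0 once unpaused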

Test skip (18/306)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestAddons/parallel/Registry (16.58s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 11.832717ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-rhhqc" [b68105b8-2637-410c-9b57-852e3f271db2] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010268327s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7rwjl" [e708e69c-a670-4e38-ab99-ea2a42997379] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011730951s
addons_test.go:305: (dbg) Run:  kubectl --context addons-821000 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-821000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-821000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.466650032s)
addons_test.go:320: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (16.58s)
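The connectivity probe that passed before the skip can be re-run manually; this sketch assumes the addons-821000 cluster and the registry addon are still running:

    kubectl --context addons-821000 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"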

TestAddons/parallel/Ingress (12.23s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-821000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-821000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-821000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cfa16853-8dda-4eaa-9c4a-fc617a44381f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cfa16853-8dda-4eaa-9c4a-fc617a44381f] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00801216s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p addons-821000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:247: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.23s)
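The in-node check above can still be exercised by hand; a sketch assuming the addons-821000 profile is running (reaching the ingress from the macOS host on the docker driver generally needs extra port forwarding, e.g. something like minikube tunnel, which is why the DNS variant is skipped here):

    out/minikube-darwin-amd64 -p addons-821000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"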

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1597: (dbg) Run:  kubectl --context functional-691000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1603: (dbg) Run:  kubectl --context functional-691000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-scwhz" [81992472-23af-41bf-84da-1365a2e8a00d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-scwhz" [81992472-23af-41bf-84da-1365a2e8a00d] Running
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.00841203s
functional_test.go:1614: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.12s)
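Because direct NodePort access is unreliable on port-forwarded drivers (see the linked issue), a hedged manual alternative is to let minikube open the forward itself; this sketch assumes the functional-691000 profile and the hello-node-connect service still exist:

    # prints a reachable URL and keeps the port-forward open on the docker driver
    out/minikube-darwin-amd64 service hello-node-connect --url -p functional-691000
    curl <printed URL>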

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:544: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.93s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-416000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-416000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-416000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-416000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-416000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-416000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-416000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-416000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-416000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-416000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-416000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: /etc/hosts:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: /etc/resolv.conf:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-416000

>>> host: crictl pods:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: crictl containers:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> k8s: describe netcat deployment:
error: context "cilium-416000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-416000" does not exist

>>> k8s: netcat logs:
error: context "cilium-416000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-416000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-416000" does not exist

>>> k8s: coredns logs:
error: context "cilium-416000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-416000" does not exist

>>> k8s: api server logs:
error: context "cilium-416000" does not exist

>>> host: /etc/cni:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: ip a s:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: ip r s:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: iptables-save:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: iptables table nat:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-416000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-416000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-416000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-416000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-416000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-416000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-416000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-416000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-416000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-416000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-416000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: kubelet daemon config:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> k8s: kubelet logs:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-416000

>>> host: docker daemon status:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: docker daemon config:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: docker system info:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: cri-docker daemon status:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: cri-docker daemon config:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: cri-dockerd version:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: containerd daemon status:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: containerd daemon config:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: containerd config dump:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: crio daemon status:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: crio daemon config:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: /etc/crio:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

>>> host: crio config:
* Profile "cilium-416000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-416000"

----------------------- debugLogs end: cilium-416000 [took: 5.41620753s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-416000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-416000
--- SKIP: TestNetworkPlugins/group/cilium (5.93s)

TestStartStop/group/disable-driver-mounts (0.4s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-669000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-669000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.40s)