Test Report: Docker_macOS 15770

c18687863e947329a019937a2709fbcc4c6cf8b9:2023-02-03:27723

Failed tests (14/306)

TestIngressAddonLegacy/StartLegacyK8sCluster (256.1s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-802000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0203 14:18:36.845852    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:20:53.003593    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:21:10.662032    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:10.667676    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:10.678927    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:10.700477    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:10.741673    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:10.823119    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:10.985294    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:11.305431    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:11.945649    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:13.228081    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:15.789344    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:20.699065    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:21:20.912120    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:31.153857    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:21:51.636900    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-802000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m16.063963872s)

-- stdout --
	* [ingress-addon-legacy-802000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-802000 in cluster ingress-addon-legacy-802000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0203 14:18:09.423507    5571 out.go:296] Setting OutFile to fd 1 ...
	I0203 14:18:09.423660    5571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:18:09.423666    5571 out.go:309] Setting ErrFile to fd 2...
	I0203 14:18:09.423670    5571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:18:09.423776    5571 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 14:18:09.424325    5571 out.go:303] Setting JSON to false
	I0203 14:18:09.442577    5571 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1064,"bootTime":1675461625,"procs":378,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0203 14:18:09.442672    5571 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0203 14:18:09.464909    5571 out.go:177] * [ingress-addon-legacy-802000] minikube v1.29.0 on Darwin 13.2
	I0203 14:18:09.524870    5571 notify.go:220] Checking for updates...
	I0203 14:18:09.546426    5571 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 14:18:09.567616    5571 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 14:18:09.589587    5571 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0203 14:18:09.610706    5571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 14:18:09.632787    5571 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	I0203 14:18:09.654788    5571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 14:18:09.676908    5571 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 14:18:09.741925    5571 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0203 14:18:09.742051    5571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 14:18:09.881469    5571 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-03 22:18:09.790770114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 14:18:09.925202    5571 out.go:177] * Using the docker driver based on user configuration
	I0203 14:18:09.947205    5571 start.go:296] selected driver: docker
	I0203 14:18:09.947232    5571 start.go:857] validating driver "docker" against <nil>
	I0203 14:18:09.947258    5571 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 14:18:09.951114    5571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 14:18:10.092697    5571 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-03 22:18:09.999825183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 14:18:10.092823    5571 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0203 14:18:10.093003    5571 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 14:18:10.114414    5571 out.go:177] * Using Docker Desktop driver with root privileges
	I0203 14:18:10.136449    5571 cni.go:84] Creating CNI manager for ""
	I0203 14:18:10.136486    5571 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0203 14:18:10.136498    5571 start_flags.go:319] config:
	{Name:ingress-addon-legacy-802000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 14:18:10.158163    5571 out.go:177] * Starting control plane node ingress-addon-legacy-802000 in cluster ingress-addon-legacy-802000
	I0203 14:18:10.180315    5571 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 14:18:10.202664    5571 out.go:177] * Pulling base image ...
	I0203 14:18:10.246646    5571 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0203 14:18:10.246708    5571 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 14:18:10.298680    5571 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0203 14:18:10.298710    5571 cache.go:57] Caching tarball of preloaded images
	I0203 14:18:10.298978    5571 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0203 14:18:10.326769    5571 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0203 14:18:10.368249    5571 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0203 14:18:10.370813    5571 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon, skipping pull
	I0203 14:18:10.370832    5571 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in daemon, skipping load
	I0203 14:18:10.449232    5571 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0203 14:18:14.926735    5571 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0203 14:18:14.926895    5571 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0203 14:18:15.544927    5571 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0203 14:18:15.545201    5571 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/config.json ...
	I0203 14:18:15.545226    5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/config.json: {Name:mk1d19ec64aab48957c1893a621acdaa55ff6817 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:18:15.545550    5571 cache.go:193] Successfully downloaded all kic artifacts
	I0203 14:18:15.545576    5571 start.go:364] acquiring machines lock for ingress-addon-legacy-802000: {Name:mkae2c167f7c614411367460fc6d96a043b50f3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 14:18:15.545728    5571 start.go:368] acquired machines lock for "ingress-addon-legacy-802000" in 145.343µs
	I0203 14:18:15.545749    5571 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-802000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-802000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 14:18:15.545859    5571 start.go:125] createHost starting for "" (driver="docker")
	I0203 14:18:15.568078    5571 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0203 14:18:15.568388    5571 start.go:159] libmachine.API.Create for "ingress-addon-legacy-802000" (driver="docker")
	I0203 14:18:15.568460    5571 client.go:168] LocalClient.Create starting
	I0203 14:18:15.568656    5571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem
	I0203 14:18:15.568746    5571 main.go:141] libmachine: Decoding PEM data...
	I0203 14:18:15.568778    5571 main.go:141] libmachine: Parsing certificate...
	I0203 14:18:15.568870    5571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem
	I0203 14:18:15.568941    5571 main.go:141] libmachine: Decoding PEM data...
	I0203 14:18:15.568962    5571 main.go:141] libmachine: Parsing certificate...
	I0203 14:18:15.589554    5571 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-802000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0203 14:18:15.647039    5571 cli_runner.go:211] docker network inspect ingress-addon-legacy-802000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0203 14:18:15.647154    5571 network_create.go:281] running [docker network inspect ingress-addon-legacy-802000] to gather additional debugging logs...
	I0203 14:18:15.647172    5571 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-802000
	W0203 14:18:15.702198    5571 cli_runner.go:211] docker network inspect ingress-addon-legacy-802000 returned with exit code 1
	I0203 14:18:15.702228    5571 network_create.go:284] error running [docker network inspect ingress-addon-legacy-802000]: docker network inspect ingress-addon-legacy-802000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-802000
	I0203 14:18:15.702250    5571 network_create.go:286] output of [docker network inspect ingress-addon-legacy-802000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-802000
	
	** /stderr **
	I0203 14:18:15.702345    5571 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0203 14:18:15.755975    5571 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00044d5a0}
	I0203 14:18:15.756012    5571 network_create.go:123] attempt to create docker network ingress-addon-legacy-802000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0203 14:18:15.756094    5571 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-802000 ingress-addon-legacy-802000
	I0203 14:18:15.843957    5571 network_create.go:107] docker network ingress-addon-legacy-802000 192.168.49.0/24 created
	I0203 14:18:15.843993    5571 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-802000" container
	I0203 14:18:15.844122    5571 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0203 14:18:15.897810    5571 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-802000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-802000 --label created_by.minikube.sigs.k8s.io=true
	I0203 14:18:15.951736    5571 oci.go:103] Successfully created a docker volume ingress-addon-legacy-802000
	I0203 14:18:15.951882    5571 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-802000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-802000 --entrypoint /usr/bin/test -v ingress-addon-legacy-802000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -d /var/lib
	I0203 14:18:16.386462    5571 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-802000
	I0203 14:18:16.386500    5571 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0203 14:18:16.386516    5571 kic.go:190] Starting extracting preloaded images to volume ...
	I0203 14:18:16.386632    5571 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-802000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0203 14:18:22.580800    5571 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-802000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.193929274s)
	I0203 14:18:22.580825    5571 kic.go:199] duration metric: took 6.194139 seconds to extract preloaded images to volume
	I0203 14:18:22.580942    5571 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0203 14:18:22.721119    5571 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-802000 --name ingress-addon-legacy-802000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-802000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-802000 --network ingress-addon-legacy-802000 --ip 192.168.49.2 --volume ingress-addon-legacy-802000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8
	I0203 14:18:23.071913    5571 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-802000 --format={{.State.Running}}
	I0203 14:18:23.132577    5571 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-802000 --format={{.State.Status}}
	I0203 14:18:23.194306    5571 cli_runner.go:164] Run: docker exec ingress-addon-legacy-802000 stat /var/lib/dpkg/alternatives/iptables
	I0203 14:18:23.310391    5571 oci.go:144] the created container "ingress-addon-legacy-802000" has a running status.
	I0203 14:18:23.310431    5571 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa...
	I0203 14:18:23.356063    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0203 14:18:23.356141    5571 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0203 14:18:23.463561    5571 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-802000 --format={{.State.Status}}
	I0203 14:18:23.524699    5571 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0203 14:18:23.524720    5571 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-802000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0203 14:18:23.630861    5571 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-802000 --format={{.State.Status}}
	I0203 14:18:23.686121    5571 machine.go:88] provisioning docker machine ...
	I0203 14:18:23.686156    5571 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-802000"
	I0203 14:18:23.686255    5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:18:23.744484    5571 main.go:141] libmachine: Using SSH client type: native
	I0203 14:18:23.744685    5571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50695 <nil> <nil>}
	I0203 14:18:23.744700    5571 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-802000 && echo "ingress-addon-legacy-802000" | sudo tee /etc/hostname
	I0203 14:18:23.884102    5571 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-802000
	
	I0203 14:18:23.884197    5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:18:23.942156    5571 main.go:141] libmachine: Using SSH client type: native
	I0203 14:18:23.942328    5571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50695 <nil> <nil>}
	I0203 14:18:23.942343    5571 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-802000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-802000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-802000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 14:18:24.071843    5571 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 14:18:24.071864    5571 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15770-1719/.minikube CaCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15770-1719/.minikube}
	I0203 14:18:24.071883    5571 ubuntu.go:177] setting up certificates
	I0203 14:18:24.071898    5571 provision.go:83] configureAuth start
	I0203 14:18:24.071974    5571 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-802000
	I0203 14:18:24.128740    5571 provision.go:138] copyHostCerts
	I0203 14:18:24.128785    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem
	I0203 14:18:24.128839    5571 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem, removing ...
	I0203 14:18:24.128846    5571 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem
	I0203 14:18:24.128975    5571 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem (1078 bytes)
	I0203 14:18:24.129138    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem
	I0203 14:18:24.129172    5571 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem, removing ...
	I0203 14:18:24.129177    5571 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem
	I0203 14:18:24.129249    5571 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem (1123 bytes)
	I0203 14:18:24.129364    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem
	I0203 14:18:24.129406    5571 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem, removing ...
	I0203 14:18:24.129410    5571 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem
	I0203 14:18:24.129472    5571 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem (1675 bytes)
	I0203 14:18:24.129594    5571 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-802000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-802000]
	I0203 14:18:24.418431    5571 provision.go:172] copyRemoteCerts
	I0203 14:18:24.418488    5571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 14:18:24.418538    5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:18:24.477732    5571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50695 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa Username:docker}
	I0203 14:18:24.569965    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0203 14:18:24.570051    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 14:18:24.587504    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0203 14:18:24.587596    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0203 14:18:24.604489    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0203 14:18:24.604567    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 14:18:24.621493    5571 provision.go:86] duration metric: configureAuth took 549.567212ms
	I0203 14:18:24.621507    5571 ubuntu.go:193] setting minikube options for container-runtime
	I0203 14:18:24.621654    5571 config.go:180] Loaded profile config "ingress-addon-legacy-802000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0203 14:18:24.621711    5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:18:24.678288    5571 main.go:141] libmachine: Using SSH client type: native
	I0203 14:18:24.678463    5571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50695 <nil> <nil>}
	I0203 14:18:24.678478    5571 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 14:18:24.807691    5571 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0203 14:18:24.807704    5571 ubuntu.go:71] root file system type: overlay
	I0203 14:18:24.807850    5571 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 14:18:24.807940    5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:18:24.864861    5571 main.go:141] libmachine: Using SSH client type: native
	I0203 14:18:24.865027    5571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50695 <nil> <nil>}
	I0203 14:18:24.865079    5571 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 14:18:24.999973    5571 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 14:18:25.000098    5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:18:25.058151    5571 main.go:141] libmachine: Using SSH client type: native
	I0203 14:18:25.058313    5571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50695 <nil> <nil>}
	I0203 14:18:25.058326    5571 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 14:18:25.643207    5571 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:18:24.998107984 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0203 14:18:25.643238    5571 machine.go:91] provisioned docker machine in 1.957040904s
	I0203 14:18:25.643245    5571 client.go:171] LocalClient.Create took 10.074508294s
	I0203 14:18:25.643263    5571 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-802000" took 10.074611027s
	I0203 14:18:25.643275    5571 start.go:300] post-start starting for "ingress-addon-legacy-802000" (driver="docker")
	I0203 14:18:25.643282    5571 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 14:18:25.643360    5571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 14:18:25.643414    5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:18:25.699986    5571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50695 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa Username:docker}
	I0203 14:18:25.792604    5571 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 14:18:25.796169    5571 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0203 14:18:25.796188    5571 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0203 14:18:25.796200    5571 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0203 14:18:25.796206    5571 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0203 14:18:25.796216    5571 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/addons for local assets ...
	I0203 14:18:25.796338    5571 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/files for local assets ...
	I0203 14:18:25.796514    5571 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> 25682.pem in /etc/ssl/certs
	I0203 14:18:25.796520    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> /etc/ssl/certs/25682.pem
	I0203 14:18:25.796718    5571 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 14:18:25.804161    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /etc/ssl/certs/25682.pem (1708 bytes)
	I0203 14:18:25.821152    5571 start.go:303] post-start completed in 177.858737ms
	I0203 14:18:25.821668    5571 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-802000
	I0203 14:18:25.879231    5571 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/config.json ...
	I0203 14:18:25.879645    5571 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 14:18:25.879703    5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:18:25.935822    5571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50695 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa Username:docker}
	I0203 14:18:26.025442    5571 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0203 14:18:26.029858    5571 start.go:128] duration metric: createHost completed in 10.483713619s
	I0203 14:18:26.029874    5571 start.go:83] releasing machines lock for "ingress-addon-legacy-802000", held for 10.483859799s
	I0203 14:18:26.029949    5571 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-802000
	I0203 14:18:26.085605    5571 ssh_runner.go:195] Run: cat /version.json
	I0203 14:18:26.085639    5571 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0203 14:18:26.085672    5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:18:26.085710    5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:18:26.147127    5571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50695 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa Username:docker}
	I0203 14:18:26.147297    5571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50695 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa Username:docker}
	I0203 14:18:26.428525    5571 ssh_runner.go:195] Run: systemctl --version
	I0203 14:18:26.433390    5571 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 14:18:26.438275    5571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0203 14:18:26.458101    5571 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0203 14:18:26.458182    5571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0203 14:18:26.471878    5571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0203 14:18:26.479454    5571 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0203 14:18:26.479470    5571 start.go:483] detecting cgroup driver to use...
	I0203 14:18:26.479482    5571 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 14:18:26.479570    5571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 14:18:26.492705    5571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
	I0203 14:18:26.501122    5571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 14:18:26.509613    5571 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 14:18:26.509668    5571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 14:18:26.518054    5571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 14:18:26.526198    5571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 14:18:26.534377    5571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 14:18:26.542633    5571 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 14:18:26.550511    5571 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 14:18:26.558669    5571 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 14:18:26.565918    5571 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 14:18:26.572818    5571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 14:18:26.636962    5571 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 14:18:26.711656    5571 start.go:483] detecting cgroup driver to use...
	I0203 14:18:26.711683    5571 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 14:18:26.711747    5571 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 14:18:26.721948    5571 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0203 14:18:26.722017    5571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 14:18:26.731877    5571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 14:18:26.746787    5571 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 14:18:26.855944    5571 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 14:18:26.945031    5571 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 14:18:26.945046    5571 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0203 14:18:26.958968    5571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 14:18:27.049286    5571 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 14:18:27.248310    5571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 14:18:27.277652    5571 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
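The steps above rewrite /etc/containerd/config.toml and /etc/docker/daemon.json so both runtimes use the cgroupfs driver, then restart Docker. A minimal way to confirm the result from the macOS host is sketched below; it assumes the ingress-addon-legacy-802000 node container is still running and only mirrors checks minikube itself performs later in this log (the grep keys come from the sed commands above, and docker info --format {{.CgroupDriver}} is run by minikube at 14:18:27.630725):

	# keys patched by the sed commands above
	docker exec -t ingress-addon-legacy-802000 grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# daemon.json written by the scp at 14:18:26.945046, and the driver Docker actually reports
	docker exec -t ingress-addon-legacy-802000 cat /etc/docker/daemon.json
	docker exec -t ingress-addon-legacy-802000 docker info --format '{{.CgroupDriver}}'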
	I0203 14:18:27.349692    5571 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.23 ...
	I0203 14:18:27.349858    5571 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-802000 dig +short host.docker.internal
	I0203 14:18:27.512009    5571 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0203 14:18:27.512135    5571 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0203 14:18:27.517099    5571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 14:18:27.527048    5571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:18:27.583724    5571 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0203 14:18:27.583810    5571 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 14:18:27.607235    5571 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0203 14:18:27.607253    5571 docker.go:560] Images already preloaded, skipping extraction
	I0203 14:18:27.607351    5571 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 14:18:27.630620    5571 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0203 14:18:27.630640    5571 cache_images.go:84] Images are preloaded, skipping loading
	I0203 14:18:27.630725    5571 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 14:18:27.701917    5571 cni.go:84] Creating CNI manager for ""
	I0203 14:18:27.701935    5571 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0203 14:18:27.701972    5571 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0203 14:18:27.701988    5571 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-802000 NodeName:ingress-addon-legacy-802000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0203 14:18:27.702110    5571 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-802000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 14:18:27.702206    5571 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-802000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0203 14:18:27.702271    5571 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0203 14:18:27.710067    5571 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 14:18:27.710149    5571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 14:18:27.717521    5571 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0203 14:18:27.730507    5571 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0203 14:18:27.743163    5571 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
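The kubeadm config generated at 14:18:27.702110 is the payload staged here as /var/tmp/minikube/kubeadm.yaml.new and later promoted to /var/tmp/minikube/kubeadm.yaml by the sudo cp at 14:18:28.376741. If the rendered file needs to be inspected after a failed run, a minimal sketch, assuming the node container is still up:

	docker exec -t ingress-addon-legacy-802000 cat /var/tmp/minikube/kubeadm.yaml
	# the staged copy, in case the promotion step has not run yet
	docker exec -t ingress-addon-legacy-802000 cat /var/tmp/minikube/kubeadm.yaml.new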
	I0203 14:18:27.755783    5571 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0203 14:18:27.759538    5571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 14:18:27.769176    5571 certs.go:56] Setting up /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000 for IP: 192.168.49.2
	I0203 14:18:27.769197    5571 certs.go:186] acquiring lock for shared ca certs: {Name:mkdec04c6cc16ac0dcab0ae849b602e6c1942576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:18:27.769380    5571 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key
	I0203 14:18:27.769455    5571 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key
	I0203 14:18:27.769501    5571 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/client.key
	I0203 14:18:27.769516    5571 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/client.crt with IP's: []
	I0203 14:18:27.835880    5571 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/client.crt ...
	I0203 14:18:27.835889    5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/client.crt: {Name:mk4c7c93e45c89a6fe511fadc98f9279b780aec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:18:27.836169    5571 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/client.key ...
	I0203 14:18:27.836183    5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/client.key: {Name:mkde103a9cc747f76d9558504ded1cb1c7da1102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:18:27.836383    5571 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key.dd3b5fb2
	I0203 14:18:27.836398    5571 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0203 14:18:27.922651    5571 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt.dd3b5fb2 ...
	I0203 14:18:27.922660    5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt.dd3b5fb2: {Name:mkd88ebbf7f7bbc9fd988d78230372388c0af50c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:18:27.922872    5571 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key.dd3b5fb2 ...
	I0203 14:18:27.922880    5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key.dd3b5fb2: {Name:mk7c847e8222904c3cb69c64c6bf7cf0ef2c3015 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:18:27.923095    5571 certs.go:333] copying /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt
	I0203 14:18:27.923283    5571 certs.go:337] copying /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key
	I0203 14:18:27.923451    5571 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.key
	I0203 14:18:27.923466    5571 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.crt with IP's: []
	I0203 14:18:28.059984    5571 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.crt ...
	I0203 14:18:28.059993    5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.crt: {Name:mk30ef9499cded1193335db2e2ed3e4a9595a1e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:18:28.060227    5571 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.key ...
	I0203 14:18:28.060235    5571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.key: {Name:mke14c44f1625324c23ce50fbd2bf2ea5215aacf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:18:28.060417    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0203 14:18:28.060446    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0203 14:18:28.060466    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0203 14:18:28.060485    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0203 14:18:28.060505    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0203 14:18:28.060523    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0203 14:18:28.060540    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0203 14:18:28.060557    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0203 14:18:28.060663    5571 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem (1338 bytes)
	W0203 14:18:28.060718    5571 certs.go:397] ignoring /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568_empty.pem, impossibly tiny 0 bytes
	I0203 14:18:28.060729    5571 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem (1675 bytes)
	I0203 14:18:28.060761    5571 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem (1078 bytes)
	I0203 14:18:28.060795    5571 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem (1123 bytes)
	I0203 14:18:28.060824    5571 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem (1675 bytes)
	I0203 14:18:28.060892    5571 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem (1708 bytes)
	I0203 14:18:28.060929    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0203 14:18:28.060949    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem -> /usr/share/ca-certificates/2568.pem
	I0203 14:18:28.060966    5571 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> /usr/share/ca-certificates/25682.pem
	I0203 14:18:28.061491    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0203 14:18:28.080164    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 14:18:28.097134    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 14:18:28.114042    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/ingress-addon-legacy-802000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 14:18:28.130819    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 14:18:28.147873    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 14:18:28.164854    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 14:18:28.181969    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 14:18:28.198845    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 14:18:28.216092    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem --> /usr/share/ca-certificates/2568.pem (1338 bytes)
	I0203 14:18:28.233172    5571 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /usr/share/ca-certificates/25682.pem (1708 bytes)
	I0203 14:18:28.250146    5571 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 14:18:28.262895    5571 ssh_runner.go:195] Run: openssl version
	I0203 14:18:28.268348    5571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 14:18:28.276498    5571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 14:18:28.280495    5571 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb  3 22:08 /usr/share/ca-certificates/minikubeCA.pem
	I0203 14:18:28.280543    5571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 14:18:28.286044    5571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 14:18:28.294223    5571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2568.pem && ln -fs /usr/share/ca-certificates/2568.pem /etc/ssl/certs/2568.pem"
	I0203 14:18:28.302244    5571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2568.pem
	I0203 14:18:28.306285    5571 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb  3 22:13 /usr/share/ca-certificates/2568.pem
	I0203 14:18:28.306330    5571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2568.pem
	I0203 14:18:28.311792    5571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2568.pem /etc/ssl/certs/51391683.0"
	I0203 14:18:28.319804    5571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25682.pem && ln -fs /usr/share/ca-certificates/25682.pem /etc/ssl/certs/25682.pem"
	I0203 14:18:28.327917    5571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25682.pem
	I0203 14:18:28.331821    5571 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb  3 22:13 /usr/share/ca-certificates/25682.pem
	I0203 14:18:28.331864    5571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25682.pem
	I0203 14:18:28.337331    5571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25682.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 14:18:28.345469    5571 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-802000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-802000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 14:18:28.345580    5571 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 14:18:28.368922    5571 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 14:18:28.376741    5571 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 14:18:28.384282    5571 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0203 14:18:28.384334    5571 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 14:18:28.391687    5571 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
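The status-2 exit above only means none of the kubeconfig files exist yet on the freshly created node, so minikube skips the stale-config cleanup and proceeds to kubeadm init. The same check can be repeated by hand, assuming the node container is reachable (the ls invocation is exactly the one minikube ran at 14:18:28.384334, wrapped in docker exec):

	docker exec -t ingress-addon-legacy-802000 sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf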
	I0203 14:18:28.391711    5571 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 14:18:28.439088    5571 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0203 14:18:28.439139    5571 kubeadm.go:322] [preflight] Running pre-flight checks
	I0203 14:18:28.737153    5571 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 14:18:28.737236    5571 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 14:18:28.737320    5571 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 14:18:28.961399    5571 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 14:18:28.962087    5571 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 14:18:28.962125    5571 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0203 14:18:29.033201    5571 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 14:18:29.054996    5571 out.go:204]   - Generating certificates and keys ...
	I0203 14:18:29.055096    5571 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0203 14:18:29.055187    5571 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0203 14:18:29.138062    5571 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 14:18:29.222299    5571 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0203 14:18:29.382974    5571 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0203 14:18:29.521554    5571 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0203 14:18:29.663269    5571 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0203 14:18:29.663423    5571 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-802000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0203 14:18:29.925220    5571 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0203 14:18:29.925401    5571 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-802000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0203 14:18:30.152716    5571 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 14:18:30.284926    5571 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 14:18:30.383106    5571 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0203 14:18:30.383175    5571 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 14:18:30.474922    5571 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 14:18:30.533920    5571 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 14:18:30.661920    5571 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 14:18:31.023709    5571 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 14:18:31.024235    5571 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 14:18:31.045755    5571 out.go:204]   - Booting up control plane ...
	I0203 14:18:31.045855    5571 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 14:18:31.045936    5571 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 14:18:31.046004    5571 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 14:18:31.046076    5571 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 14:18:31.046206    5571 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 14:19:11.035228    5571 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0203 14:19:11.036612    5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:19:11.036825    5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:19:16.037283    5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:19:16.037444    5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:19:26.039435    5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:19:26.039667    5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:19:46.040217    5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:19:46.040373    5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:20:26.043133    5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:20:26.043353    5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:20:26.043365    5571 kubeadm.go:322] 
	I0203 14:20:26.043417    5571 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0203 14:20:26.043486    5571 kubeadm.go:322] 		timed out waiting for the condition
	I0203 14:20:26.043508    5571 kubeadm.go:322] 
	I0203 14:20:26.043545    5571 kubeadm.go:322] 	This error is likely caused by:
	I0203 14:20:26.043590    5571 kubeadm.go:322] 		- The kubelet is not running
	I0203 14:20:26.043694    5571 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 14:20:26.043704    5571 kubeadm.go:322] 
	I0203 14:20:26.043858    5571 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 14:20:26.043896    5571 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0203 14:20:26.043928    5571 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0203 14:20:26.043933    5571 kubeadm.go:322] 
	I0203 14:20:26.044060    5571 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 14:20:26.044156    5571 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 14:20:26.044171    5571 kubeadm.go:322] 
	I0203 14:20:26.044276    5571 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0203 14:20:26.044351    5571 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0203 14:20:26.044441    5571 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0203 14:20:26.044497    5571 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0203 14:20:26.044506    5571 kubeadm.go:322] 
	I0203 14:20:26.047258    5571 kubeadm.go:322] W0203 22:18:28.438298    1164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0203 14:20:26.047406    5571 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0203 14:20:26.047457    5571 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0203 14:20:26.047563    5571 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
	I0203 14:20:26.047651    5571 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 14:20:26.047777    5571 kubeadm.go:322] W0203 22:18:31.028757    1164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0203 14:20:26.047879    5571 kubeadm.go:322] W0203 22:18:31.029929    1164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0203 14:20:26.047946    5571 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 14:20:26.048018    5571 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
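At this point kubeadm has given up waiting for the kubelet health endpoint on 127.0.0.1:10248 and prints its standard troubleshooting hints. Those hints can be followed from the macOS host without an SSH session by exec-ing into the node container; a sketch, assuming the container survived the failed start (container name and commands are taken from the log above):

	docker exec -t ingress-addon-legacy-802000 systemctl status kubelet --no-pager
	docker exec -t ingress-addon-legacy-802000 journalctl -xeu kubelet --no-pager
	docker exec -t ingress-addon-legacy-802000 sh -c "docker ps -a | grep kube | grep -v pause"
	# then, for a failing container ID taken from the list above:
	# docker exec -t ingress-addon-legacy-802000 docker logs CONTAINERID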
	W0203 14:20:26.048224    5571 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-802000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-802000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0203 22:18:28.438298    1164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0203 22:18:31.028757    1164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0203 22:18:31.029929    1164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-802000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-802000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0203 22:18:28.438298    1164 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0203 22:18:31.028757    1164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0203 22:18:31.029929    1164 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0203 14:20:26.048264    5571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0203 14:20:26.463049    5571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 14:20:26.472673    5571 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0203 14:20:26.472729    5571 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 14:20:26.480118    5571 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 14:20:26.480139    5571 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 14:20:26.527819    5571 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0203 14:20:26.527878    5571 kubeadm.go:322] [preflight] Running pre-flight checks
	I0203 14:20:26.817279    5571 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 14:20:26.817400    5571 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 14:20:26.817528    5571 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 14:20:27.036412    5571 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 14:20:27.036838    5571 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 14:20:27.036873    5571 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0203 14:20:27.106940    5571 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 14:20:27.130532    5571 out.go:204]   - Generating certificates and keys ...
	I0203 14:20:27.130661    5571 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0203 14:20:27.130726    5571 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0203 14:20:27.130814    5571 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 14:20:27.130889    5571 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0203 14:20:27.130949    5571 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 14:20:27.131013    5571 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0203 14:20:27.131072    5571 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0203 14:20:27.131124    5571 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0203 14:20:27.131199    5571 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 14:20:27.131289    5571 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 14:20:27.131342    5571 kubeadm.go:322] [certs] Using the existing "sa" key
	I0203 14:20:27.131415    5571 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 14:20:27.181116    5571 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 14:20:27.460691    5571 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 14:20:27.772680    5571 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 14:20:27.854719    5571 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 14:20:27.855252    5571 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 14:20:27.876826    5571 out.go:204]   - Booting up control plane ...
	I0203 14:20:27.877010    5571 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 14:20:27.877154    5571 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 14:20:27.877286    5571 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 14:20:27.877411    5571 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 14:20:27.877711    5571 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 14:21:07.871449    5571 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0203 14:21:07.872382    5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:21:07.872604    5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:21:12.873190    5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:21:12.873367    5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:21:22.875673    5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:21:22.875877    5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:21:42.876977    5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:21:42.877133    5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:22:22.880274    5571 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:22:22.880492    5571 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:22:22.880507    5571 kubeadm.go:322] 
	I0203 14:22:22.880548    5571 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0203 14:22:22.880610    5571 kubeadm.go:322] 		timed out waiting for the condition
	I0203 14:22:22.880636    5571 kubeadm.go:322] 
	I0203 14:22:22.880718    5571 kubeadm.go:322] 	This error is likely caused by:
	I0203 14:22:22.880759    5571 kubeadm.go:322] 		- The kubelet is not running
	I0203 14:22:22.880877    5571 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 14:22:22.880894    5571 kubeadm.go:322] 
	I0203 14:22:22.881010    5571 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 14:22:22.881071    5571 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0203 14:22:22.881126    5571 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0203 14:22:22.881130    5571 kubeadm.go:322] 
	I0203 14:22:22.881211    5571 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 14:22:22.881276    5571 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0203 14:22:22.881282    5571 kubeadm.go:322] 
	I0203 14:22:22.881363    5571 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0203 14:22:22.881413    5571 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0203 14:22:22.881486    5571 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0203 14:22:22.881514    5571 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0203 14:22:22.881520    5571 kubeadm.go:322] 
	I0203 14:22:22.884563    5571 kubeadm.go:322] W0203 22:20:26.526982    3653 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0203 14:22:22.884765    5571 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0203 14:22:22.884833    5571 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0203 14:22:22.884946    5571 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
	I0203 14:22:22.885030    5571 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 14:22:22.885120    5571 kubeadm.go:322] W0203 22:20:27.859197    3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0203 14:22:22.885228    5571 kubeadm.go:322] W0203 22:20:27.859976    3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0203 14:22:22.885309    5571 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 14:22:22.885366    5571 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0203 14:22:22.885411    5571 kubeadm.go:403] StartCluster complete in 3m54.525667948s
	I0203 14:22:22.885500    5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 14:22:22.907479    5571 logs.go:279] 0 containers: []
	W0203 14:22:22.907492    5571 logs.go:281] No container was found matching "kube-apiserver"
	I0203 14:22:22.907558    5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 14:22:22.929736    5571 logs.go:279] 0 containers: []
	W0203 14:22:22.929749    5571 logs.go:281] No container was found matching "etcd"
	I0203 14:22:22.929817    5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 14:22:22.951906    5571 logs.go:279] 0 containers: []
	W0203 14:22:22.951918    5571 logs.go:281] No container was found matching "coredns"
	I0203 14:22:22.951986    5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 14:22:22.974675    5571 logs.go:279] 0 containers: []
	W0203 14:22:22.974689    5571 logs.go:281] No container was found matching "kube-scheduler"
	I0203 14:22:22.974756    5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 14:22:22.999245    5571 logs.go:279] 0 containers: []
	W0203 14:22:22.999258    5571 logs.go:281] No container was found matching "kube-proxy"
	I0203 14:22:22.999338    5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 14:22:23.022331    5571 logs.go:279] 0 containers: []
	W0203 14:22:23.022345    5571 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 14:22:23.022412    5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 14:22:23.046201    5571 logs.go:279] 0 containers: []
	W0203 14:22:23.046216    5571 logs.go:281] No container was found matching "storage-provisioner"
	I0203 14:22:23.046300    5571 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 14:22:23.069025    5571 logs.go:279] 0 containers: []
	W0203 14:22:23.069039    5571 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 14:22:23.069046    5571 logs.go:124] Gathering logs for dmesg ...
	I0203 14:22:23.069057    5571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 14:22:23.081102    5571 logs.go:124] Gathering logs for describe nodes ...
	I0203 14:22:23.081115    5571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 14:22:23.134484    5571 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 14:22:23.134496    5571 logs.go:124] Gathering logs for Docker ...
	I0203 14:22:23.134507    5571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 14:22:23.151630    5571 logs.go:124] Gathering logs for container status ...
	I0203 14:22:23.151646    5571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 14:22:25.203035    5571 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051321976s)
	I0203 14:22:25.203149    5571 logs.go:124] Gathering logs for kubelet ...
	I0203 14:22:25.203157    5571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0203 14:22:25.241728    5571 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0203 22:20:26.526982    3653 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0203 22:20:27.859197    3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0203 22:20:27.859976    3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0203 14:22:25.241749    5571 out.go:239] * 
	* 
	W0203 14:22:25.241866    5571 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0203 22:20:26.526982    3653 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0203 22:20:27.859197    3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0203 22:20:27.859976    3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0203 22:20:26.526982    3653 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0203 22:20:27.859197    3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0203 22:20:27.859976    3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 14:22:25.241879    5571 out.go:239] * 
	* 
	W0203 14:22:25.242523    5571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 14:22:25.305410    5571 out.go:177] 
	W0203 14:22:25.348597    5571 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0203 22:20:26.526982    3653 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0203 22:20:27.859197    3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0203 22:20:27.859976    3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0203 22:20:26.526982    3653 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0203 22:20:27.859197    3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0203 22:20:27.859976    3653 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 14:22:25.348776    5571 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0203 14:22:25.348871    5571 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0203 14:22:25.391192    5571 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-802000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (256.10s)
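The failure above follows one pattern: the kubelet health endpoint on 127.0.0.1:10248 refuses connections until kubeadm's 4m0s wait-control-plane timeout expires, and afterwards the log collector finds no k8s_* containers at all, so the kubelet apparently never started on the node container. A minimal triage sketch, reusing only names and flags that already appear in this log (the node container ingress-addon-legacy-802000, the start flags from the failing command, and the cgroup-driver suggestion at the end of the stderr block); that systemctl, journalctl and a nested docker CLI are available inside the node image is an assumption:

	# inspect the kubelet inside the docker-driver node container created by this run
	docker exec ingress-addon-legacy-802000 systemctl status kubelet
	docker exec ingress-addon-legacy-802000 journalctl -u kubelet --no-pager | tail -n 100
	# the containers kubeadm expected (none were found by the log collector above)
	docker exec ingress-addon-legacy-802000 docker ps -a | grep kube | grep -v pause
	# retry the start with the cgroup-driver override suggested in the log
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-802000 --kubernetes-version=v1.18.20 --memory=4096 --driver=docker --extra-config=kubelet.cgroup-driver=systemd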

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.6s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-802000 addons enable ingress --alsologtostderr -v=5
E0203 14:22:32.599971    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:23:54.524403    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-802000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.140367274s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
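The stderr capture that follows is a retry loop (retry.go) around 'kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml'; every attempt fails with connection refused on localhost:8443 because the apiserver from the previous failed start never came up, so the addon enable finally exits with MK_ADDON_ENABLE. A quick pre-check of that precondition, sketched with the profile and container names from this run (curl being present inside the node container is an assumption):

	out/minikube-darwin-amd64 status -p ingress-addon-legacy-802000
	# connection refused here reproduces the failure below without waiting out the retry backoff
	docker exec ingress-addon-legacy-802000 curl -sk https://localhost:8443/healthz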
** stderr ** 
	I0203 14:22:25.537541    5904 out.go:296] Setting OutFile to fd 1 ...
	I0203 14:22:25.537892    5904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:22:25.537897    5904 out.go:309] Setting ErrFile to fd 2...
	I0203 14:22:25.537901    5904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:22:25.538009    5904 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 14:22:25.559705    5904 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0203 14:22:25.580925    5904 config.go:180] Loaded profile config "ingress-addon-legacy-802000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0203 14:22:25.580946    5904 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-802000"
	I0203 14:22:25.580956    5904 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-802000"
	I0203 14:22:25.581247    5904 host.go:66] Checking if "ingress-addon-legacy-802000" exists ...
	I0203 14:22:25.581767    5904 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-802000 --format={{.State.Status}}
	I0203 14:22:25.661218    5904 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0203 14:22:25.682341    5904 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0203 14:22:25.704295    5904 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0203 14:22:25.726114    5904 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0203 14:22:25.747223    5904 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0203 14:22:25.747273    5904 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0203 14:22:25.747386    5904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:22:25.804484    5904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50695 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa Username:docker}
	I0203 14:22:25.901897    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:22:25.953057    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:25.953081    5904 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:26.230335    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:22:26.284141    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:26.284157    5904 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:26.826598    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:22:26.879907    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:26.879925    5904 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:27.535299    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:22:27.589797    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:27.589819    5904 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:28.381595    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:22:28.435005    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:28.435021    5904 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:29.606042    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:22:29.659166    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:29.659181    5904 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:31.914611    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:22:31.970265    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:31.970280    5904 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:33.583421    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:22:33.637005    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:33.637019    5904 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:36.443650    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:22:36.496940    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:36.496955    5904 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:40.322425    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:22:40.375448    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:40.375464    5904 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:48.073311    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:22:48.126737    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:22:48.126755    5904 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:02.763921    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:23:02.819209    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:02.819225    5904 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:31.227112    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:23:31.281873    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:31.281889    5904 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:54.453102    5904 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0203 14:23:54.508181    5904 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:54.508211    5904 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-802000"
	I0203 14:23:54.531823    5904 out.go:177] * Verifying ingress addon...
	I0203 14:23:54.553148    5904 out.go:177] 
	W0203 14:23:54.574915    5904 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-802000" does not exist: client config: context "ingress-addon-legacy-802000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-802000" does not exist: client config: context "ingress-addon-legacy-802000" does not exist]
	W0203 14:23:54.574943    5904 out.go:239] * 
	* 
	W0203 14:23:54.578675    5904 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 14:23:54.599859    5904 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-802000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-802000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23",
	        "Created": "2023-02-03T22:18:22.774129393Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 48730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T22:18:23.063093727Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23/hostname",
	        "HostsPath": "/var/lib/docker/containers/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23/hosts",
	        "LogPath": "/var/lib/docker/containers/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23-json.log",
	        "Name": "/ingress-addon-legacy-802000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-802000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-802000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4945d28674bc773d043c8da0fa5d378da76c05628f9b1fcf340f6a9c239bbb5d-init/diff:/var/lib/docker/overlay2/48b9eff26e94f4439154aad348135bd66f3f3733ee1f2bd22fc60e3a240f764f/diff:/var/lib/docker/overlay2/89930e70b646c5893dab0f6f4274a9fb3b60a11d62da2f59d4b55fbf1c480a90/diff:/var/lib/docker/overlay2/3ae0575a256264d050211e3ca122b2804683b9f4323f7a2c2a2d45f4df3254dd/diff:/var/lib/docker/overlay2/6468a293a6ba199c732872fb7807de809fa2ff9ecdccaeb7146f28e1a4dc9607/diff:/var/lib/docker/overlay2/3fab248b5834a764e1996b2fea0af0100ffc2c150728124745a8e42d43a2193d/diff:/var/lib/docker/overlay2/1ec21b4015d44918fda148d959030dadcaa3527172fde96571978bdabab6921e/diff:/var/lib/docker/overlay2/5465a266a0268ad0ffa1c12afbc320e2232b025ee4eaa5c74b2f5b236ce5285d/diff:/var/lib/docker/overlay2/61b7474b98e6431b966662b98c31f46eb982bdd7098bfccdad928e6c3c0a9024/diff:/var/lib/docker/overlay2/d0925bff8df24b32d176f1438969c0c3adac5ec1bc1da61c2a8bf17e4fd9313b/diff:/var/lib/docker/overlay2/b6c213
617f12dea208efc9c642db1147a22658b32383a0256106a994fcafebca/diff:/var/lib/docker/overlay2/5127e35d4cf68de9ece51806ff390f9b88bac61eaa8bfdf4cf5d6ab1e5b2ca27/diff:/var/lib/docker/overlay2/3d041d254d21e7ec2e2abdce56a3e6eadb3f668238bf3667e7c25effdcc05940/diff:/var/lib/docker/overlay2/15bab989d641601a640d89b58f645e79668cb801bf10066ecd9790e4c8bbd4f1/diff:/var/lib/docker/overlay2/d6e45696a59c84a5b4ad5ad0bec8b561335a71b3c4eaaa35bcbcc00bd3fbcc1a/diff:/var/lib/docker/overlay2/d0a13d3859926a84eb9c7b571fa8c670d15ebf0ab75e6e8971a7b8679b316ca1/diff:/var/lib/docker/overlay2/a5096e1509a8455c4d67f60b17102a08c795ad1bdbeeac3dd75c3b05ec6d922c/diff:/var/lib/docker/overlay2/aeeda7f653d5dcfbb5ef8a7b53a6aba12a5892c04d984f10a71be11833addb2d/diff:/var/lib/docker/overlay2/84bf768303dfde933d5690feb659b1acd5419ca63d78c4760218d578794c3bbe/diff:/var/lib/docker/overlay2/dec6762f77828143e0cb548cc3a6bb9cc10b9f4376070bc49558da8dfd0b7d2e/diff:/var/lib/docker/overlay2/cc9805f6c705d4d0c6c7675e7745ab0dcdd90879809a2089256c0606e80cee7a/diff:/var/lib/d
ocker/overlay2/e34b4063934c19fe1e614a10ef1e9582f55283fa37c9d0b89d0df8ca32a8a03a/diff:/var/lib/docker/overlay2/c6b6cf801ae9739234022d5e5c55176ee1249b3441400f8b9dbde2c15c6d66e3/diff:/var/lib/docker/overlay2/73dfe58a9f4125f321d10ef97d5c2d4951480455bb243f166600ead63c22f5c2/diff:/var/lib/docker/overlay2/476ba412f9e61cc020124b5051db9c99ea08176881e535e0b5fe6ddb51b94a72/diff:/var/lib/docker/overlay2/2729a4e84f2d55dc49c9417254fc26c0baa21f93cd9b58386f869cf5add162c1/diff:/var/lib/docker/overlay2/8523001ce06172b58b31ebf311f62bf435ed3a3d48fec58d3f1239f29386a28b/diff:/var/lib/docker/overlay2/2b7edb3177897200229f3ba188cfd00e16df91cf85b91a5f08ddbfa15d898a3d/diff:/var/lib/docker/overlay2/94231ff2ac5bf304d3c25d204f1a7b2195ef2230bfbb7bb5a1a1d6f2f4faad6a/diff:/var/lib/docker/overlay2/698d3cd800bae40e0aeb942360c67b793550c24bab66ba43080cbcaa500a9069/diff:/var/lib/docker/overlay2/6aadd46423b70866f00e0f4f83310711c1bc22b4dc8989e6b58cd6254540c428/diff:/var/lib/docker/overlay2/035afbe91bfd3bebd444b29f3ceed1e954aab275fca0c8aaf2364df71f4
6e0c3/diff:/var/lib/docker/overlay2/bc68049ba1568fe8bb188720c62bcc993e62a364901ba41a533aa2991cceaf82/diff:/var/lib/docker/overlay2/c3373595ff40ba0ece2698f99fc2e1c9a83c0ef6a1df119125e3009256dee2ed/diff:/var/lib/docker/overlay2/59c87dca7d8987a7e1b5cd959772e06b96d6ecb36399ff9e35a1ecfe4ed33345/diff:/var/lib/docker/overlay2/22434c33a4994657a469b040789f269ac912f4046d76f2531dff05de4700fb3b/diff:/var/lib/docker/overlay2/699ea76dd0a43fedc031501535714f087d7ec3f37593390c9e81c029373c7f8f/diff:/var/lib/docker/overlay2/e9414c264977801651ed9f3ee268cd0f245614747e184e8f3170e1e95d1fc081/diff:/var/lib/docker/overlay2/2781a0c689754699793aa9bdfeeabdaa1c6905e265302dd267c6c12daa01eb9c/diff:/var/lib/docker/overlay2/4b59a1fc73d3e865eaf7e2e62fd6d2808234c79d79b6b30f6b1a482a291580d3/diff:/var/lib/docker/overlay2/7f51e83dcff3227064daa2b7cc6a7c87f8f5e415fa8723316c24512d6029941d/diff:/var/lib/docker/overlay2/50662c60babc4d383f2af76fc66f3712bcc9e85a50f0525fa680c8336af46ce3/diff:/var/lib/docker/overlay2/2112d8437fae31ae95f85bdf08e3f29d09d7b8
adf34c9608a2e3bfecc049e0c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4945d28674bc773d043c8da0fa5d378da76c05628f9b1fcf340f6a9c239bbb5d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4945d28674bc773d043c8da0fa5d378da76c05628f9b1fcf340f6a9c239bbb5d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4945d28674bc773d043c8da0fa5d378da76c05628f9b1fcf340f6a9c239bbb5d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-802000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-802000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-802000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-802000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-802000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc234c4aa42175cb39bfff39cc9cbedf4e32c05fe85cd3624f05b268d848def5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50695"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50696"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50697"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50698"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50699"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cc234c4aa421",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-802000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "09bf894094d9",
	                        "ingress-addon-legacy-802000"
	                    ],
	                    "NetworkID": "0ab16039434105ed2a0568d69d761fee3033d0921bf456227aae4c8b5be74729",
	                    "EndpointID": "7011b283cc97d45217626d79047bb75811c0f15fe29d8d674b57fc78cd8ef6c8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-802000 -n ingress-addon-legacy-802000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-802000 -n ingress-addon-legacy-802000: exit status 6 (400.795445ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 14:23:55.074427    5994 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-802000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-802000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.60s)
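Every retry in the log above fails the same way: kubectl inside the node cannot reach the apiserver on localhost:8443, so the ingress-deploy.yaml manifest is never applied. A minimal sketch of how one might reproduce the failing apply by hand and check whether an apiserver container is running at all (assuming the profile name, Kubernetes version, and docker runtime shown above; these commands are illustrative follow-up steps, not part of the test run):

	# re-run the exact apply the addon manager was retrying, from inside the node
	minikube -p ingress-addon-legacy-802000 ssh -- "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml"
	# check whether a kube-apiserver container exists inside the node at all
	minikube -p ingress-addon-legacy-802000 ssh -- "sudo docker ps --filter name=kube-apiserver"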

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.52s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-802000 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-802000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.064752312s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 14:23:55.139923    6006 out.go:296] Setting OutFile to fd 1 ...
	I0203 14:23:55.140188    6006 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:23:55.140193    6006 out.go:309] Setting ErrFile to fd 2...
	I0203 14:23:55.140198    6006 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:23:55.140306    6006 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 14:23:55.162434    6006 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0203 14:23:55.184553    6006 config.go:180] Loaded profile config "ingress-addon-legacy-802000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0203 14:23:55.184574    6006 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-802000"
	I0203 14:23:55.184585    6006 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-802000"
	I0203 14:23:55.184930    6006 host.go:66] Checking if "ingress-addon-legacy-802000" exists ...
	I0203 14:23:55.185603    6006 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-802000 --format={{.State.Status}}
	I0203 14:23:55.263435    6006 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0203 14:23:55.285392    6006 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0203 14:23:55.307181    6006 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0203 14:23:55.307219    6006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0203 14:23:55.307375    6006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-802000
	I0203 14:23:55.365935    6006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50695 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/ingress-addon-legacy-802000/id_rsa Username:docker}
	I0203 14:23:55.464016    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:23:55.514494    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:55.514518    6006 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:55.793055    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:23:55.847747    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:55.847762    6006 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:56.388096    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:23:56.439517    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:56.439536    6006 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:57.094795    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:23:57.146720    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:57.146739    6006 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:57.938608    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:23:57.990840    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:57.990856    6006 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:59.163458    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:23:59.216709    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:23:59.216723    6006 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:24:01.472175    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:24:01.527124    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:24:01.527139    6006 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:24:03.138129    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:24:03.190394    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:24:03.190410    6006 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:24:05.995664    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:24:06.052708    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:24:06.052723    6006 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:24:09.877918    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:24:09.929134    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:24:09.929150    6006 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:24:17.627868    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:24:17.682976    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:24:17.682991    6006 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:24:32.319957    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:24:32.375756    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:24:32.375770    6006 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:25:00.784650    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:25:00.838731    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:25:00.838748    6006 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:25:24.008031    6006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0203 14:25:24.063645    6006 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0203 14:25:24.085348    6006 out.go:177] 
	W0203 14:25:24.106137    6006 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0203 14:25:24.106155    6006 out.go:239] * 
	* 
	W0203 14:25:24.108485    6006 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 14:25:24.129181    6006 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-802000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-802000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23",
	        "Created": "2023-02-03T22:18:22.774129393Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 48730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T22:18:23.063093727Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23/hostname",
	        "HostsPath": "/var/lib/docker/containers/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23/hosts",
	        "LogPath": "/var/lib/docker/containers/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23-json.log",
	        "Name": "/ingress-addon-legacy-802000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-802000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-802000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4945d28674bc773d043c8da0fa5d378da76c05628f9b1fcf340f6a9c239bbb5d-init/diff:/var/lib/docker/overlay2/48b9eff26e94f4439154aad348135bd66f3f3733ee1f2bd22fc60e3a240f764f/diff:/var/lib/docker/overlay2/89930e70b646c5893dab0f6f4274a9fb3b60a11d62da2f59d4b55fbf1c480a90/diff:/var/lib/docker/overlay2/3ae0575a256264d050211e3ca122b2804683b9f4323f7a2c2a2d45f4df3254dd/diff:/var/lib/docker/overlay2/6468a293a6ba199c732872fb7807de809fa2ff9ecdccaeb7146f28e1a4dc9607/diff:/var/lib/docker/overlay2/3fab248b5834a764e1996b2fea0af0100ffc2c150728124745a8e42d43a2193d/diff:/var/lib/docker/overlay2/1ec21b4015d44918fda148d959030dadcaa3527172fde96571978bdabab6921e/diff:/var/lib/docker/overlay2/5465a266a0268ad0ffa1c12afbc320e2232b025ee4eaa5c74b2f5b236ce5285d/diff:/var/lib/docker/overlay2/61b7474b98e6431b966662b98c31f46eb982bdd7098bfccdad928e6c3c0a9024/diff:/var/lib/docker/overlay2/d0925bff8df24b32d176f1438969c0c3adac5ec1bc1da61c2a8bf17e4fd9313b/diff:/var/lib/docker/overlay2/b6c213
617f12dea208efc9c642db1147a22658b32383a0256106a994fcafebca/diff:/var/lib/docker/overlay2/5127e35d4cf68de9ece51806ff390f9b88bac61eaa8bfdf4cf5d6ab1e5b2ca27/diff:/var/lib/docker/overlay2/3d041d254d21e7ec2e2abdce56a3e6eadb3f668238bf3667e7c25effdcc05940/diff:/var/lib/docker/overlay2/15bab989d641601a640d89b58f645e79668cb801bf10066ecd9790e4c8bbd4f1/diff:/var/lib/docker/overlay2/d6e45696a59c84a5b4ad5ad0bec8b561335a71b3c4eaaa35bcbcc00bd3fbcc1a/diff:/var/lib/docker/overlay2/d0a13d3859926a84eb9c7b571fa8c670d15ebf0ab75e6e8971a7b8679b316ca1/diff:/var/lib/docker/overlay2/a5096e1509a8455c4d67f60b17102a08c795ad1bdbeeac3dd75c3b05ec6d922c/diff:/var/lib/docker/overlay2/aeeda7f653d5dcfbb5ef8a7b53a6aba12a5892c04d984f10a71be11833addb2d/diff:/var/lib/docker/overlay2/84bf768303dfde933d5690feb659b1acd5419ca63d78c4760218d578794c3bbe/diff:/var/lib/docker/overlay2/dec6762f77828143e0cb548cc3a6bb9cc10b9f4376070bc49558da8dfd0b7d2e/diff:/var/lib/docker/overlay2/cc9805f6c705d4d0c6c7675e7745ab0dcdd90879809a2089256c0606e80cee7a/diff:/var/lib/d
ocker/overlay2/e34b4063934c19fe1e614a10ef1e9582f55283fa37c9d0b89d0df8ca32a8a03a/diff:/var/lib/docker/overlay2/c6b6cf801ae9739234022d5e5c55176ee1249b3441400f8b9dbde2c15c6d66e3/diff:/var/lib/docker/overlay2/73dfe58a9f4125f321d10ef97d5c2d4951480455bb243f166600ead63c22f5c2/diff:/var/lib/docker/overlay2/476ba412f9e61cc020124b5051db9c99ea08176881e535e0b5fe6ddb51b94a72/diff:/var/lib/docker/overlay2/2729a4e84f2d55dc49c9417254fc26c0baa21f93cd9b58386f869cf5add162c1/diff:/var/lib/docker/overlay2/8523001ce06172b58b31ebf311f62bf435ed3a3d48fec58d3f1239f29386a28b/diff:/var/lib/docker/overlay2/2b7edb3177897200229f3ba188cfd00e16df91cf85b91a5f08ddbfa15d898a3d/diff:/var/lib/docker/overlay2/94231ff2ac5bf304d3c25d204f1a7b2195ef2230bfbb7bb5a1a1d6f2f4faad6a/diff:/var/lib/docker/overlay2/698d3cd800bae40e0aeb942360c67b793550c24bab66ba43080cbcaa500a9069/diff:/var/lib/docker/overlay2/6aadd46423b70866f00e0f4f83310711c1bc22b4dc8989e6b58cd6254540c428/diff:/var/lib/docker/overlay2/035afbe91bfd3bebd444b29f3ceed1e954aab275fca0c8aaf2364df71f4
6e0c3/diff:/var/lib/docker/overlay2/bc68049ba1568fe8bb188720c62bcc993e62a364901ba41a533aa2991cceaf82/diff:/var/lib/docker/overlay2/c3373595ff40ba0ece2698f99fc2e1c9a83c0ef6a1df119125e3009256dee2ed/diff:/var/lib/docker/overlay2/59c87dca7d8987a7e1b5cd959772e06b96d6ecb36399ff9e35a1ecfe4ed33345/diff:/var/lib/docker/overlay2/22434c33a4994657a469b040789f269ac912f4046d76f2531dff05de4700fb3b/diff:/var/lib/docker/overlay2/699ea76dd0a43fedc031501535714f087d7ec3f37593390c9e81c029373c7f8f/diff:/var/lib/docker/overlay2/e9414c264977801651ed9f3ee268cd0f245614747e184e8f3170e1e95d1fc081/diff:/var/lib/docker/overlay2/2781a0c689754699793aa9bdfeeabdaa1c6905e265302dd267c6c12daa01eb9c/diff:/var/lib/docker/overlay2/4b59a1fc73d3e865eaf7e2e62fd6d2808234c79d79b6b30f6b1a482a291580d3/diff:/var/lib/docker/overlay2/7f51e83dcff3227064daa2b7cc6a7c87f8f5e415fa8723316c24512d6029941d/diff:/var/lib/docker/overlay2/50662c60babc4d383f2af76fc66f3712bcc9e85a50f0525fa680c8336af46ce3/diff:/var/lib/docker/overlay2/2112d8437fae31ae95f85bdf08e3f29d09d7b8
adf34c9608a2e3bfecc049e0c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4945d28674bc773d043c8da0fa5d378da76c05628f9b1fcf340f6a9c239bbb5d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4945d28674bc773d043c8da0fa5d378da76c05628f9b1fcf340f6a9c239bbb5d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4945d28674bc773d043c8da0fa5d378da76c05628f9b1fcf340f6a9c239bbb5d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-802000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-802000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-802000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-802000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-802000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc234c4aa42175cb39bfff39cc9cbedf4e32c05fe85cd3624f05b268d848def5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50695"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50696"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50697"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50698"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50699"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cc234c4aa421",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-802000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "09bf894094d9",
	                        "ingress-addon-legacy-802000"
	                    ],
	                    "NetworkID": "0ab16039434105ed2a0568d69d761fee3033d0921bf456227aae4c8b5be74729",
	                    "EndpointID": "7011b283cc97d45217626d79047bb75811c0f15fe29d8d674b57fc78cd8ef6c8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-802000 -n ingress-addon-legacy-802000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-802000 -n ingress-addon-legacy-802000: exit status 6 (396.066318ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 14:25:24.596841    6096 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-802000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-802000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.52s)
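
Note on the failure mode above: the status command exits with code 6 because the profile name cannot be found in the kubeconfig, which is what the `kubeconfig endpoint: extract IP: "ingress-addon-legacy-802000" does not appear in ... kubeconfig` error reports; `minikube update-context` rewrites that entry. The following is a minimal, illustrative Go sketch of that kind of lookup. It assumes k8s.io/client-go and is not minikube's actual status.go code; the profile name and exit code are simply taken from the log above.

// Illustrative sketch only: look up a profile's cluster entry in a kubeconfig,
// the step that fails above with "does not appear in ... kubeconfig".
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	profile := "ingress-addon-legacy-802000" // profile name from the log above
	kubeconfig := os.Getenv("KUBECONFIG")

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}

	// The status check needs the API server endpoint recorded for the profile's
	// cluster entry; if the profile was never written to this kubeconfig, the
	// lookup fails and the test reports exit status 6 (as seen above).
	cluster, ok := cfg.Clusters[profile]
	if !ok {
		fmt.Fprintf(os.Stderr, "%q does not appear in %s\n", profile, kubeconfig)
		os.Exit(6)
	}
	fmt.Println("endpoint:", cluster.Server)
}
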

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.46s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:171: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-802000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-802000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23",
	        "Created": "2023-02-03T22:18:22.774129393Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 48730,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T22:18:23.063093727Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23/hostname",
	        "HostsPath": "/var/lib/docker/containers/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23/hosts",
	        "LogPath": "/var/lib/docker/containers/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23/09bf894094d95ab07ba59cf1e55b0caf42193ea8d1073124b3a035075e790d23-json.log",
	        "Name": "/ingress-addon-legacy-802000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-802000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-802000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4945d28674bc773d043c8da0fa5d378da76c05628f9b1fcf340f6a9c239bbb5d-init/diff:/var/lib/docker/overlay2/48b9eff26e94f4439154aad348135bd66f3f3733ee1f2bd22fc60e3a240f764f/diff:/var/lib/docker/overlay2/89930e70b646c5893dab0f6f4274a9fb3b60a11d62da2f59d4b55fbf1c480a90/diff:/var/lib/docker/overlay2/3ae0575a256264d050211e3ca122b2804683b9f4323f7a2c2a2d45f4df3254dd/diff:/var/lib/docker/overlay2/6468a293a6ba199c732872fb7807de809fa2ff9ecdccaeb7146f28e1a4dc9607/diff:/var/lib/docker/overlay2/3fab248b5834a764e1996b2fea0af0100ffc2c150728124745a8e42d43a2193d/diff:/var/lib/docker/overlay2/1ec21b4015d44918fda148d959030dadcaa3527172fde96571978bdabab6921e/diff:/var/lib/docker/overlay2/5465a266a0268ad0ffa1c12afbc320e2232b025ee4eaa5c74b2f5b236ce5285d/diff:/var/lib/docker/overlay2/61b7474b98e6431b966662b98c31f46eb982bdd7098bfccdad928e6c3c0a9024/diff:/var/lib/docker/overlay2/d0925bff8df24b32d176f1438969c0c3adac5ec1bc1da61c2a8bf17e4fd9313b/diff:/var/lib/docker/overlay2/b6c213
617f12dea208efc9c642db1147a22658b32383a0256106a994fcafebca/diff:/var/lib/docker/overlay2/5127e35d4cf68de9ece51806ff390f9b88bac61eaa8bfdf4cf5d6ab1e5b2ca27/diff:/var/lib/docker/overlay2/3d041d254d21e7ec2e2abdce56a3e6eadb3f668238bf3667e7c25effdcc05940/diff:/var/lib/docker/overlay2/15bab989d641601a640d89b58f645e79668cb801bf10066ecd9790e4c8bbd4f1/diff:/var/lib/docker/overlay2/d6e45696a59c84a5b4ad5ad0bec8b561335a71b3c4eaaa35bcbcc00bd3fbcc1a/diff:/var/lib/docker/overlay2/d0a13d3859926a84eb9c7b571fa8c670d15ebf0ab75e6e8971a7b8679b316ca1/diff:/var/lib/docker/overlay2/a5096e1509a8455c4d67f60b17102a08c795ad1bdbeeac3dd75c3b05ec6d922c/diff:/var/lib/docker/overlay2/aeeda7f653d5dcfbb5ef8a7b53a6aba12a5892c04d984f10a71be11833addb2d/diff:/var/lib/docker/overlay2/84bf768303dfde933d5690feb659b1acd5419ca63d78c4760218d578794c3bbe/diff:/var/lib/docker/overlay2/dec6762f77828143e0cb548cc3a6bb9cc10b9f4376070bc49558da8dfd0b7d2e/diff:/var/lib/docker/overlay2/cc9805f6c705d4d0c6c7675e7745ab0dcdd90879809a2089256c0606e80cee7a/diff:/var/lib/d
ocker/overlay2/e34b4063934c19fe1e614a10ef1e9582f55283fa37c9d0b89d0df8ca32a8a03a/diff:/var/lib/docker/overlay2/c6b6cf801ae9739234022d5e5c55176ee1249b3441400f8b9dbde2c15c6d66e3/diff:/var/lib/docker/overlay2/73dfe58a9f4125f321d10ef97d5c2d4951480455bb243f166600ead63c22f5c2/diff:/var/lib/docker/overlay2/476ba412f9e61cc020124b5051db9c99ea08176881e535e0b5fe6ddb51b94a72/diff:/var/lib/docker/overlay2/2729a4e84f2d55dc49c9417254fc26c0baa21f93cd9b58386f869cf5add162c1/diff:/var/lib/docker/overlay2/8523001ce06172b58b31ebf311f62bf435ed3a3d48fec58d3f1239f29386a28b/diff:/var/lib/docker/overlay2/2b7edb3177897200229f3ba188cfd00e16df91cf85b91a5f08ddbfa15d898a3d/diff:/var/lib/docker/overlay2/94231ff2ac5bf304d3c25d204f1a7b2195ef2230bfbb7bb5a1a1d6f2f4faad6a/diff:/var/lib/docker/overlay2/698d3cd800bae40e0aeb942360c67b793550c24bab66ba43080cbcaa500a9069/diff:/var/lib/docker/overlay2/6aadd46423b70866f00e0f4f83310711c1bc22b4dc8989e6b58cd6254540c428/diff:/var/lib/docker/overlay2/035afbe91bfd3bebd444b29f3ceed1e954aab275fca0c8aaf2364df71f4
6e0c3/diff:/var/lib/docker/overlay2/bc68049ba1568fe8bb188720c62bcc993e62a364901ba41a533aa2991cceaf82/diff:/var/lib/docker/overlay2/c3373595ff40ba0ece2698f99fc2e1c9a83c0ef6a1df119125e3009256dee2ed/diff:/var/lib/docker/overlay2/59c87dca7d8987a7e1b5cd959772e06b96d6ecb36399ff9e35a1ecfe4ed33345/diff:/var/lib/docker/overlay2/22434c33a4994657a469b040789f269ac912f4046d76f2531dff05de4700fb3b/diff:/var/lib/docker/overlay2/699ea76dd0a43fedc031501535714f087d7ec3f37593390c9e81c029373c7f8f/diff:/var/lib/docker/overlay2/e9414c264977801651ed9f3ee268cd0f245614747e184e8f3170e1e95d1fc081/diff:/var/lib/docker/overlay2/2781a0c689754699793aa9bdfeeabdaa1c6905e265302dd267c6c12daa01eb9c/diff:/var/lib/docker/overlay2/4b59a1fc73d3e865eaf7e2e62fd6d2808234c79d79b6b30f6b1a482a291580d3/diff:/var/lib/docker/overlay2/7f51e83dcff3227064daa2b7cc6a7c87f8f5e415fa8723316c24512d6029941d/diff:/var/lib/docker/overlay2/50662c60babc4d383f2af76fc66f3712bcc9e85a50f0525fa680c8336af46ce3/diff:/var/lib/docker/overlay2/2112d8437fae31ae95f85bdf08e3f29d09d7b8
adf34c9608a2e3bfecc049e0c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4945d28674bc773d043c8da0fa5d378da76c05628f9b1fcf340f6a9c239bbb5d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4945d28674bc773d043c8da0fa5d378da76c05628f9b1fcf340f6a9c239bbb5d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4945d28674bc773d043c8da0fa5d378da76c05628f9b1fcf340f6a9c239bbb5d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-802000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-802000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-802000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-802000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-802000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc234c4aa42175cb39bfff39cc9cbedf4e32c05fe85cd3624f05b268d848def5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50695"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50696"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50697"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50698"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50699"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cc234c4aa421",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-802000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "09bf894094d9",
	                        "ingress-addon-legacy-802000"
	                    ],
	                    "NetworkID": "0ab16039434105ed2a0568d69d761fee3033d0921bf456227aae4c8b5be74729",
	                    "EndpointID": "7011b283cc97d45217626d79047bb75811c0f15fe29d8d674b57fc78cd8ef6c8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-802000 -n ingress-addon-legacy-802000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-802000 -n ingress-addon-legacy-802000: exit status 6 (397.128878ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 14:25:25.053517    6108 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-802000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-802000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.46s)
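
The `failed to get Kubernetes client: <nil>` line above is the addon validation giving up before it can reach the cluster, because no usable client can be built from the stale kubeconfig context. Below is a minimal sketch of that client-construction step, assuming k8s.io/client-go; it is illustrative only and not the helper used by addons_test.go.

// Illustrative sketch only: build a Kubernetes clientset from a kubeconfig.
// With a stale or missing context, this is the step that fails and the addon
// validation cannot proceed.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	restCfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "build rest config:", err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(restCfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, "new clientset:", err)
		os.Exit(1)
	}
	// A trivial call to confirm the client actually works against the cluster.
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, "list pods:", err)
		os.Exit(1)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
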

                                                
                                    
TestRunningBinaryUpgrade (69.96s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.715224590.exe start -p running-upgrade-917000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.715224590.exe start -p running-upgrade-917000 --memory=2200 --vm-driver=docker : exit status 70 (54.268434498s)

                                                
                                                
-- stdout --
	! [running-upgrade-917000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig756117889
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:45:10.284987839 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-917000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:45:29.650855941 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-917000", then "minikube start -p running-upgrade-917000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.29.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.29.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 15.34 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 35.39 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 53.83 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 68.94 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 84.94 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 99.98 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 117.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 138.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 156.84 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 176.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 193.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 207.97 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 224.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 238.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 252.38 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 266.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 281.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 293.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 308.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 325.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 343.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 359.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 374.16 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 388.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 404.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 416.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 423.41 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 434.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 446.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 460.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 473.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 490.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 506.92 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 522.84 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 537.92 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:45:29.650855941 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.715224590.exe start -p running-upgrade-917000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.715224590.exe start -p running-upgrade-917000 --memory=2200 --vm-driver=docker : exit status 70 (4.243820741s)

                                                
                                                
-- stdout --
	* [running-upgrade-917000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig460096561
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-917000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.715224590.exe start -p running-upgrade-917000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.715224590.exe start -p running-upgrade-917000 --memory=2200 --vm-driver=docker : exit status 70 (4.406210127s)

                                                
                                                
-- stdout --
	* [running-upgrade-917000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3946064934
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-917000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:134: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-02-03 14:45:44.055227 -0800 PST m=+2282.786736700
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-917000
helpers_test.go:235: (dbg) docker inspect running-upgrade-917000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e32568bf2ef491ab373bdb6b4441af8e0ad614a5f8c5265cfdbae9e798f99b06",
	        "Created": "2023-02-03T22:45:18.515328636Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 172700,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T22:45:18.777241648Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/e32568bf2ef491ab373bdb6b4441af8e0ad614a5f8c5265cfdbae9e798f99b06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e32568bf2ef491ab373bdb6b4441af8e0ad614a5f8c5265cfdbae9e798f99b06/hostname",
	        "HostsPath": "/var/lib/docker/containers/e32568bf2ef491ab373bdb6b4441af8e0ad614a5f8c5265cfdbae9e798f99b06/hosts",
	        "LogPath": "/var/lib/docker/containers/e32568bf2ef491ab373bdb6b4441af8e0ad614a5f8c5265cfdbae9e798f99b06/e32568bf2ef491ab373bdb6b4441af8e0ad614a5f8c5265cfdbae9e798f99b06-json.log",
	        "Name": "/running-upgrade-917000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-917000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/369518be8067bda959960e79ad71379fe41fc2629802927145f910eb307a190d-init/diff:/var/lib/docker/overlay2/0e45eea7f3fb4962af92006f1e50e7e1da5c85efa57d6aa3026f0ceb6e570b13/diff:/var/lib/docker/overlay2/c4a202f224a13cbb1a3c83e83a9a87b0fee6291f1aa9044b2bd01f7977c702fe/diff:/var/lib/docker/overlay2/b42f579467ea0803828df2cb72a179577a360ffc0a043910d0b1b0ab083b1773/diff:/var/lib/docker/overlay2/2eb7e4f1831bd2b2aac8391fb5f73c949b5b7d0a99cdd12e902d50aaf06c5cd2/diff:/var/lib/docker/overlay2/a12c9308abebef887cfaffb957c3dedda7b18bf2f4bec1d2b757a38b571a49f5/diff:/var/lib/docker/overlay2/8dded86ab9bfc2e181766326dfc1228a773720c621ef760a5943b059a74b5382/diff:/var/lib/docker/overlay2/0f9ed804492884efd49f2d26ebcf8a4af978522ae9c03128eff86109dabb8a7e/diff:/var/lib/docker/overlay2/dc13b340ca01b6f458386eb447441c8ab4fd38217e83efec290e3e258a5f127a/diff:/var/lib/docker/overlay2/476224c17de9ec09306385aa99af28a3dcca086e06168e8ff795796b08209bec/diff:/var/lib/docker/overlay2/c31373
437066fa8cb8716806dd01edd6f166098662b75b09a1401ad1e82de00b/diff:/var/lib/docker/overlay2/8a90b043c23a109c365402618d64f0bc61c99600a5f33f59fc23aa397ef7359d/diff:/var/lib/docker/overlay2/acc163d177a8160322a6263a046bdf4b27fec8a6338c413a1a9b6cead1df053e/diff:/var/lib/docker/overlay2/6fdb9b7b2a0a20ad1e74d64834c0ca968548b83c2b9dc0a6102d76cc40fc73c1/diff:/var/lib/docker/overlay2/1fc3b3f057ad56bd36d87c66e13d2eb3f8d2f8d42b78f994a41190966398230d/diff:/var/lib/docker/overlay2/7c77adf70fdd0620f690efce220c3c7cf524af3c35c26fe756c8594a4d8661cf/diff:/var/lib/docker/overlay2/99e3af7f7732d41e329ccbd3d67d8012be36ee1a30cb8a3333f8c3ba9d1bc2c6/diff:/var/lib/docker/overlay2/acdc6195f10a56c56c1d1ac87e2109fe9858322fecdb507fb88ed23a6acfd210/diff:/var/lib/docker/overlay2/c1a5824ac19243cc33ef6fc824d95ff7d32ab972f633a808667f84945c179ba0/diff:/var/lib/docker/overlay2/18e84590ec3ac1be497fcfb52de9ce1c04a8888ffc87279fcf7d7bd1a4547ef9/diff:/var/lib/docker/overlay2/46d5e1b43a5e1732c6b3a3c8cd84333e267a4742f32950d149a92508fcbad55f/diff:/var/lib/d
ocker/overlay2/54befe217e5b1fd508e83940924934465ce90d988a724bcc5a560957ff01e649/diff",
	                "MergedDir": "/var/lib/docker/overlay2/369518be8067bda959960e79ad71379fe41fc2629802927145f910eb307a190d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/369518be8067bda959960e79ad71379fe41fc2629802927145f910eb307a190d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/369518be8067bda959960e79ad71379fe41fc2629802927145f910eb307a190d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-917000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-917000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-917000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-917000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-917000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "48634fb63608b5b0fcd396737835f611268f05b2414a3c3497dc2ece1d508ce8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52785"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52786"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52787"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/48634fb63608",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "b76916bcb148c3132fa243cc29cd9b778b68578e2e660c4208e3c4909c5ebb7a",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "83bb67533b6070e1c8f957427f8c719b1a829c4c7551ecb7db2a7401a6fee8e7",
	                    "EndpointID": "b76916bcb148c3132fa243cc29cd9b778b68578e2e660c4208e3c4909c5ebb7a",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-917000 -n running-upgrade-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-917000 -n running-upgrade-917000: exit status 6 (388.129886ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 14:45:44.491120   12835 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-917000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-917000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-917000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-917000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-917000: (2.347310713s)
--- FAIL: TestRunningBinaryUpgrade (69.96s)
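
For reference, the post-mortem `docker inspect` output above is the raw JSON the helpers print after a failure. Below is a minimal sketch of pulling out the fields that matter most here (container state, exit code, IP), assuming only the Go standard library and the docker CLI; the container name and field names are taken from the log above, and this is not helpers_test.go itself.

// Illustrative sketch only: run `docker inspect` and decode the fields most
// useful when a start fails, mirroring the post-mortem output shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type inspect struct {
	Name  string
	State struct {
		Status   string
		ExitCode int
	}
	NetworkSettings struct {
		IPAddress string
	}
}

func main() {
	name := "running-upgrade-917000" // container/profile name from the log above
	out, err := exec.Command("docker", "inspect", name).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "docker inspect:", err)
		os.Exit(1)
	}
	// docker inspect prints a JSON array of container objects.
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range containers {
		fmt.Printf("%s: status=%s exit=%d ip=%q\n",
			c.Name, c.State.Status, c.State.ExitCode, c.NetworkSettings.IPAddress)
	}
}
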

                                                
                                    
TestKubernetesUpgrade (588.83s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0203 14:46:45.898221    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 14:46:56.138665    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m11.140339091s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-759000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-759000 in cluster kubernetes-upgrade-759000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 14:46:41.491552   13217 out.go:296] Setting OutFile to fd 1 ...
	I0203 14:46:41.491721   13217 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:46:41.491726   13217 out.go:309] Setting ErrFile to fd 2...
	I0203 14:46:41.491730   13217 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:46:41.491844   13217 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 14:46:41.492348   13217 out.go:303] Setting JSON to false
	I0203 14:46:41.510610   13217 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":2776,"bootTime":1675461625,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0203 14:46:41.510698   13217 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0203 14:46:41.533037   13217 out.go:177] * [kubernetes-upgrade-759000] minikube v1.29.0 on Darwin 13.2
	I0203 14:46:41.575643   13217 notify.go:220] Checking for updates...
	I0203 14:46:41.598327   13217 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 14:46:41.619479   13217 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 14:46:41.640513   13217 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0203 14:46:41.661399   13217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 14:46:41.682650   13217 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	I0203 14:46:41.703585   13217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 14:46:41.725364   13217 config.go:180] Loaded profile config "cert-expiration-895000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 14:46:41.725490   13217 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 14:46:41.786610   13217 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0203 14:46:41.786740   13217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 14:46:41.927228   13217 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 22:46:41.836095572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 14:46:41.971021   13217 out.go:177] * Using the docker driver based on user configuration
	I0203 14:46:41.992878   13217 start.go:296] selected driver: docker
	I0203 14:46:41.992962   13217 start.go:857] validating driver "docker" against <nil>
	I0203 14:46:41.992980   13217 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 14:46:41.996872   13217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 14:46:42.138658   13217 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 22:46:42.045975795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 14:46:42.138779   13217 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0203 14:46:42.138965   13217 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0203 14:46:42.160791   13217 out.go:177] * Using Docker Desktop driver with root privileges
	I0203 14:46:42.182506   13217 cni.go:84] Creating CNI manager for ""
	I0203 14:46:42.182541   13217 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0203 14:46:42.182555   13217 start_flags.go:319] config:
	{Name:kubernetes-upgrade-759000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 14:46:42.242486   13217 out.go:177] * Starting control plane node kubernetes-upgrade-759000 in cluster kubernetes-upgrade-759000
	I0203 14:46:42.263348   13217 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 14:46:42.300428   13217 out.go:177] * Pulling base image ...
	I0203 14:46:42.359418   13217 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 14:46:42.359519   13217 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 14:46:42.359550   13217 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0203 14:46:42.359586   13217 cache.go:57] Caching tarball of preloaded images
	I0203 14:46:42.359825   13217 preload.go:174] Found /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 14:46:42.359848   13217 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0203 14:46:42.360864   13217 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/config.json ...
	I0203 14:46:42.361004   13217 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/config.json: {Name:mk6206d1d0dbf95bb038da088c5a77eabb095413 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:46:42.416614   13217 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon, skipping pull
	I0203 14:46:42.416633   13217 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in daemon, skipping load
	I0203 14:46:42.416654   13217 cache.go:193] Successfully downloaded all kic artifacts
	I0203 14:46:42.416704   13217 start.go:364] acquiring machines lock for kubernetes-upgrade-759000: {Name:mk5f848790508fedfa98f2725b7e74e2ab8d4737 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 14:46:42.416860   13217 start.go:368] acquired machines lock for "kubernetes-upgrade-759000" in 143.709µs
	I0203 14:46:42.416889   13217 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-759000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-759000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 14:46:42.416954   13217 start.go:125] createHost starting for "" (driver="docker")
	I0203 14:46:42.459694   13217 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0203 14:46:42.459982   13217 start.go:159] libmachine.API.Create for "kubernetes-upgrade-759000" (driver="docker")
	I0203 14:46:42.460035   13217 client.go:168] LocalClient.Create starting
	I0203 14:46:42.460157   13217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem
	I0203 14:46:42.460206   13217 main.go:141] libmachine: Decoding PEM data...
	I0203 14:46:42.460225   13217 main.go:141] libmachine: Parsing certificate...
	I0203 14:46:42.460286   13217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem
	I0203 14:46:42.460319   13217 main.go:141] libmachine: Decoding PEM data...
	I0203 14:46:42.460334   13217 main.go:141] libmachine: Parsing certificate...
	I0203 14:46:42.460791   13217 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-759000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0203 14:46:42.551999   13217 cli_runner.go:211] docker network inspect kubernetes-upgrade-759000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0203 14:46:42.552088   13217 network_create.go:281] running [docker network inspect kubernetes-upgrade-759000] to gather additional debugging logs...
	I0203 14:46:42.552103   13217 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-759000
	W0203 14:46:42.605880   13217 cli_runner.go:211] docker network inspect kubernetes-upgrade-759000 returned with exit code 1
	I0203 14:46:42.605904   13217 network_create.go:284] error running [docker network inspect kubernetes-upgrade-759000]: docker network inspect kubernetes-upgrade-759000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-759000
	I0203 14:46:42.605917   13217 network_create.go:286] output of [docker network inspect kubernetes-upgrade-759000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-759000
	
	** /stderr **
	I0203 14:46:42.605996   13217 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0203 14:46:42.660891   13217 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0203 14:46:42.661224   13217 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0011fec80}
	I0203 14:46:42.661236   13217 network_create.go:123] attempt to create docker network kubernetes-upgrade-759000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0203 14:46:42.661310   13217 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-759000 kubernetes-upgrade-759000
	W0203 14:46:42.715999   13217 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-759000 kubernetes-upgrade-759000 returned with exit code 1
	W0203 14:46:42.716028   13217 network_create.go:148] failed to create docker network kubernetes-upgrade-759000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-759000 kubernetes-upgrade-759000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0203 14:46:42.716044   13217 network_create.go:115] failed to create docker network kubernetes-upgrade-759000 192.168.58.0/24, will retry: subnet is taken
	I0203 14:46:42.717512   13217 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0203 14:46:42.717849   13217 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00107b9a0}
	I0203 14:46:42.717860   13217 network_create.go:123] attempt to create docker network kubernetes-upgrade-759000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0203 14:46:42.717936   13217 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-759000 kubernetes-upgrade-759000
	W0203 14:46:42.774351   13217 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-759000 kubernetes-upgrade-759000 returned with exit code 1
	W0203 14:46:42.774386   13217 network_create.go:148] failed to create docker network kubernetes-upgrade-759000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-759000 kubernetes-upgrade-759000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0203 14:46:42.774401   13217 network_create.go:115] failed to create docker network kubernetes-upgrade-759000 192.168.67.0/24, will retry: subnet is taken
	I0203 14:46:42.775727   13217 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0203 14:46:42.776044   13217 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00131e790}
	I0203 14:46:42.776058   13217 network_create.go:123] attempt to create docker network kubernetes-upgrade-759000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0203 14:46:42.776127   13217 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-759000 kubernetes-upgrade-759000
	I0203 14:46:42.864980   13217 network_create.go:107] docker network kubernetes-upgrade-759000 192.168.76.0/24 created
	I0203 14:46:42.865010   13217 kic.go:117] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-759000" container
	I0203 14:46:42.865114   13217 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0203 14:46:42.920691   13217 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-759000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-759000 --label created_by.minikube.sigs.k8s.io=true
	I0203 14:46:42.975420   13217 oci.go:103] Successfully created a docker volume kubernetes-upgrade-759000
	I0203 14:46:42.975556   13217 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-759000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-759000 --entrypoint /usr/bin/test -v kubernetes-upgrade-759000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -d /var/lib
	I0203 14:46:43.558317   13217 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-759000
	I0203 14:46:43.558344   13217 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 14:46:43.558358   13217 kic.go:190] Starting extracting preloaded images to volume ...
	I0203 14:46:43.558476   13217 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-759000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0203 14:46:49.099399   13217 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-759000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.540741348s)
	I0203 14:46:49.099419   13217 kic.go:199] duration metric: took 5.540946 seconds to extract preloaded images to volume
	I0203 14:46:49.099542   13217 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0203 14:46:49.242024   13217 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-759000 --name kubernetes-upgrade-759000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-759000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-759000 --network kubernetes-upgrade-759000 --ip 192.168.76.2 --volume kubernetes-upgrade-759000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8
	I0203 14:46:49.597901   13217 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-759000 --format={{.State.Running}}
	I0203 14:46:49.657082   13217 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-759000 --format={{.State.Status}}
	I0203 14:46:49.718183   13217 cli_runner.go:164] Run: docker exec kubernetes-upgrade-759000 stat /var/lib/dpkg/alternatives/iptables
	I0203 14:46:49.839690   13217 oci.go:144] the created container "kubernetes-upgrade-759000" has a running status.
	I0203 14:46:49.839728   13217 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/kubernetes-upgrade-759000/id_rsa...
	I0203 14:46:49.922807   13217 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/kubernetes-upgrade-759000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0203 14:46:50.030210   13217 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-759000 --format={{.State.Status}}
	I0203 14:46:50.092107   13217 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0203 14:46:50.092127   13217 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-759000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0203 14:46:50.193953   13217 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-759000 --format={{.State.Status}}
	I0203 14:46:50.250198   13217 machine.go:88] provisioning docker machine ...
	I0203 14:46:50.250245   13217 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-759000"
	I0203 14:46:50.250348   13217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:46:50.307669   13217 main.go:141] libmachine: Using SSH client type: native
	I0203 14:46:50.307865   13217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52902 <nil> <nil>}
	I0203 14:46:50.307878   13217 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-759000 && echo "kubernetes-upgrade-759000" | sudo tee /etc/hostname
	I0203 14:46:50.446955   13217 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-759000
	
	I0203 14:46:50.447038   13217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:46:50.505553   13217 main.go:141] libmachine: Using SSH client type: native
	I0203 14:46:50.505715   13217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52902 <nil> <nil>}
	I0203 14:46:50.505727   13217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-759000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-759000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-759000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 14:46:50.635368   13217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 14:46:50.635386   13217 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15770-1719/.minikube CaCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15770-1719/.minikube}
	I0203 14:46:50.635404   13217 ubuntu.go:177] setting up certificates
	I0203 14:46:50.635411   13217 provision.go:83] configureAuth start
	I0203 14:46:50.635490   13217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-759000
	I0203 14:46:50.694754   13217 provision.go:138] copyHostCerts
	I0203 14:46:50.694855   13217 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem, removing ...
	I0203 14:46:50.694863   13217 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem
	I0203 14:46:50.694962   13217 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem (1078 bytes)
	I0203 14:46:50.695153   13217 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem, removing ...
	I0203 14:46:50.695159   13217 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem
	I0203 14:46:50.695230   13217 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem (1123 bytes)
	I0203 14:46:50.695377   13217 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem, removing ...
	I0203 14:46:50.695385   13217 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem
	I0203 14:46:50.695454   13217 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem (1675 bytes)
	I0203 14:46:50.695573   13217 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-759000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-759000]
	I0203 14:46:50.814852   13217 provision.go:172] copyRemoteCerts
	I0203 14:46:50.814914   13217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 14:46:50.814966   13217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:46:50.872082   13217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52902 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/kubernetes-upgrade-759000/id_rsa Username:docker}
	I0203 14:46:50.962878   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 14:46:50.980414   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 14:46:50.997771   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0203 14:46:51.015438   13217 provision.go:86] duration metric: configureAuth took 380.003093ms
	I0203 14:46:51.015455   13217 ubuntu.go:193] setting minikube options for container-runtime
	I0203 14:46:51.015604   13217 config.go:180] Loaded profile config "kubernetes-upgrade-759000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0203 14:46:51.015671   13217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:46:51.072391   13217 main.go:141] libmachine: Using SSH client type: native
	I0203 14:46:51.072568   13217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52902 <nil> <nil>}
	I0203 14:46:51.072582   13217 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 14:46:51.201905   13217 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0203 14:46:51.201924   13217 ubuntu.go:71] root file system type: overlay
	I0203 14:46:51.202054   13217 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 14:46:51.202138   13217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:46:51.259764   13217 main.go:141] libmachine: Using SSH client type: native
	I0203 14:46:51.259919   13217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52902 <nil> <nil>}
	I0203 14:46:51.259966   13217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 14:46:51.395985   13217 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 14:46:51.396086   13217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:46:51.452725   13217 main.go:141] libmachine: Using SSH client type: native
	I0203 14:46:51.452885   13217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52902 <nil> <nil>}
	I0203 14:46:51.452898   13217 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 14:46:52.050218   13217 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:46:51.393535734 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0203 14:46:52.050241   13217 machine.go:91] provisioned docker machine in 1.79997604s
	I0203 14:46:52.050247   13217 client.go:171] LocalClient.Create took 9.590007061s
	I0203 14:46:52.050267   13217 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-759000" took 9.590088379s
	I0203 14:46:52.050274   13217 start.go:300] post-start starting for "kubernetes-upgrade-759000" (driver="docker")
	I0203 14:46:52.050279   13217 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 14:46:52.050364   13217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 14:46:52.050449   13217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:46:52.108920   13217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52902 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/kubernetes-upgrade-759000/id_rsa Username:docker}
	I0203 14:46:52.202119   13217 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 14:46:52.205936   13217 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0203 14:46:52.205950   13217 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0203 14:46:52.205957   13217 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0203 14:46:52.205961   13217 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0203 14:46:52.205972   13217 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/addons for local assets ...
	I0203 14:46:52.206085   13217 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/files for local assets ...
	I0203 14:46:52.206261   13217 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> 25682.pem in /etc/ssl/certs
	I0203 14:46:52.206458   13217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 14:46:52.214000   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /etc/ssl/certs/25682.pem (1708 bytes)
	I0203 14:46:52.231314   13217 start.go:303] post-start completed in 181.024306ms
	I0203 14:46:52.231826   13217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-759000
	I0203 14:46:52.289574   13217 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/config.json ...
	I0203 14:46:52.290062   13217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 14:46:52.290119   13217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:46:52.346979   13217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52902 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/kubernetes-upgrade-759000/id_rsa Username:docker}
	I0203 14:46:52.437455   13217 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0203 14:46:52.442027   13217 start.go:128] duration metric: createHost completed in 10.024857148s
	I0203 14:46:52.442043   13217 start.go:83] releasing machines lock for "kubernetes-upgrade-759000", held for 10.024965347s
	I0203 14:46:52.442113   13217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-759000
	I0203 14:46:52.568773   13217 ssh_runner.go:195] Run: cat /version.json
	I0203 14:46:52.568789   13217 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0203 14:46:52.568855   13217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:46:52.568866   13217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:46:52.629270   13217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52902 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/kubernetes-upgrade-759000/id_rsa Username:docker}
	I0203 14:46:52.629426   13217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52902 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/kubernetes-upgrade-759000/id_rsa Username:docker}
	I0203 14:46:52.915743   13217 ssh_runner.go:195] Run: systemctl --version
	I0203 14:46:52.920540   13217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 14:46:52.925462   13217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0203 14:46:52.945565   13217 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0203 14:46:52.945631   13217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0203 14:46:52.959206   13217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0203 14:46:52.966805   13217 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0203 14:46:52.966828   13217 start.go:483] detecting cgroup driver to use...
	I0203 14:46:52.966841   13217 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 14:46:52.966936   13217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 14:46:52.980296   13217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0203 14:46:52.988691   13217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 14:46:52.997356   13217 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 14:46:52.997435   13217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 14:46:53.005770   13217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 14:46:53.014260   13217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 14:46:53.022775   13217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 14:46:53.031846   13217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 14:46:53.039927   13217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 14:46:53.048350   13217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 14:46:53.055775   13217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 14:46:53.063137   13217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 14:46:53.135840   13217 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 14:46:53.204580   13217 start.go:483] detecting cgroup driver to use...
	I0203 14:46:53.204600   13217 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 14:46:53.204667   13217 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 14:46:53.215072   13217 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0203 14:46:53.215133   13217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 14:46:53.226099   13217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 14:46:53.240757   13217 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 14:46:53.299197   13217 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 14:46:53.394991   13217 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 14:46:53.395015   13217 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0203 14:46:53.409124   13217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 14:46:53.493200   13217 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 14:46:53.707703   13217 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 14:46:53.738446   13217 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 14:46:53.791533   13217 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	I0203 14:46:53.791681   13217 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-759000 dig +short host.docker.internal
	I0203 14:46:53.911814   13217 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0203 14:46:53.911910   13217 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0203 14:46:53.916423   13217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 14:46:53.926917   13217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:46:53.985736   13217 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 14:46:53.985819   13217 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 14:46:54.009385   13217 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0203 14:46:54.009411   13217 docker.go:560] Images already preloaded, skipping extraction
	I0203 14:46:54.009509   13217 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 14:46:54.032963   13217 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0203 14:46:54.032982   13217 cache_images.go:84] Images are preloaded, skipping loading
	I0203 14:46:54.033063   13217 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 14:46:54.104257   13217 cni.go:84] Creating CNI manager for ""
	I0203 14:46:54.104275   13217 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0203 14:46:54.104296   13217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0203 14:46:54.104315   13217 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-759000 NodeName:kubernetes-upgrade-759000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0203 14:46:54.104450   13217 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-759000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-759000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 14:46:54.104530   13217 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-759000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0203 14:46:54.104597   13217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0203 14:46:54.112893   13217 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 14:46:54.112960   13217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 14:46:54.120256   13217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0203 14:46:54.133180   13217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 14:46:54.146174   13217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
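Note: the three scp lines above place the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit and the kubeadm config onto the node; the YAML dumped earlier is what lands as /var/tmp/minikube/kubeadm.yaml.new and is promoted to kubeadm.yaml just before init runs. To inspect exactly what the node received after a failed attempt, something along these lines (sketch; container name from this log):

    docker exec kubernetes-upgrade-759000 systemctl cat kubelet                       # merged unit plus the 10-kubeadm.conf drop-in
    docker exec kubernetes-upgrade-759000 cat /var/tmp/minikube/kubeadm.yaml.new      # the kubeadm config as copied
    docker exec kubernetes-upgrade-759000 cat /var/lib/kubelet/config.yaml            # KubeletConfiguration, once kubeadm has written it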
	I0203 14:46:54.159309   13217 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0203 14:46:54.163256   13217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 14:46:54.173351   13217 certs.go:56] Setting up /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000 for IP: 192.168.76.2
	I0203 14:46:54.173369   13217 certs.go:186] acquiring lock for shared ca certs: {Name:mkdec04c6cc16ac0dcab0ae849b602e6c1942576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:46:54.173569   13217 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key
	I0203 14:46:54.173634   13217 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key
	I0203 14:46:54.173686   13217 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/client.key
	I0203 14:46:54.173699   13217 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/client.crt with IP's: []
	I0203 14:46:54.262861   13217 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/client.crt ...
	I0203 14:46:54.262872   13217 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/client.crt: {Name:mkba7b3d95bbeddbd105d0d6748daaed0465adb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:46:54.263202   13217 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/client.key ...
	I0203 14:46:54.263210   13217 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/client.key: {Name:mk3b696c1bb5e87abeab362a853ac4d6d8587c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:46:54.263417   13217 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/apiserver.key.31bdca25
	I0203 14:46:54.263432   13217 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0203 14:46:54.311255   13217 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/apiserver.crt.31bdca25 ...
	I0203 14:46:54.311269   13217 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/apiserver.crt.31bdca25: {Name:mk044def5e2d4e3842d2222569e57852d7c82632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:46:54.311525   13217 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/apiserver.key.31bdca25 ...
	I0203 14:46:54.311533   13217 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/apiserver.key.31bdca25: {Name:mk1806dc26f85fe11d6a45230894531cc8c626ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:46:54.311787   13217 certs.go:333] copying /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/apiserver.crt
	I0203 14:46:54.311949   13217 certs.go:337] copying /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/apiserver.key
	I0203 14:46:54.312099   13217 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/proxy-client.key
	I0203 14:46:54.312113   13217 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/proxy-client.crt with IP's: []
	I0203 14:46:54.531740   13217 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/proxy-client.crt ...
	I0203 14:46:54.531763   13217 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/proxy-client.crt: {Name:mka66388a037c40504b3e27c22d9d94478b2a663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:46:54.532055   13217 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/proxy-client.key ...
	I0203 14:46:54.532063   13217 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/proxy-client.key: {Name:mk33c4976906f7b3157e52da69ba82f1ed1f26a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:46:54.532440   13217 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem (1338 bytes)
	W0203 14:46:54.532488   13217 certs.go:397] ignoring /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568_empty.pem, impossibly tiny 0 bytes
	I0203 14:46:54.532499   13217 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem (1675 bytes)
	I0203 14:46:54.532532   13217 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem (1078 bytes)
	I0203 14:46:54.532565   13217 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem (1123 bytes)
	I0203 14:46:54.532598   13217 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem (1675 bytes)
	I0203 14:46:54.532669   13217 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem (1708 bytes)
	I0203 14:46:54.533176   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0203 14:46:54.551314   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 14:46:54.569455   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 14:46:54.586986   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 14:46:54.604476   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 14:46:54.622083   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 14:46:54.639658   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 14:46:54.656902   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 14:46:54.674414   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem --> /usr/share/ca-certificates/2568.pem (1338 bytes)
	I0203 14:46:54.691937   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /usr/share/ca-certificates/25682.pem (1708 bytes)
	I0203 14:46:54.709336   13217 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 14:46:54.726947   13217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 14:46:54.739879   13217 ssh_runner.go:195] Run: openssl version
	I0203 14:46:54.745694   13217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25682.pem && ln -fs /usr/share/ca-certificates/25682.pem /etc/ssl/certs/25682.pem"
	I0203 14:46:54.754105   13217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25682.pem
	I0203 14:46:54.758102   13217 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb  3 22:13 /usr/share/ca-certificates/25682.pem
	I0203 14:46:54.758144   13217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25682.pem
	I0203 14:46:54.763819   13217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25682.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 14:46:54.772629   13217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 14:46:54.780836   13217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 14:46:54.784736   13217 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb  3 22:08 /usr/share/ca-certificates/minikubeCA.pem
	I0203 14:46:54.784781   13217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 14:46:54.790232   13217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 14:46:54.800000   13217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2568.pem && ln -fs /usr/share/ca-certificates/2568.pem /etc/ssl/certs/2568.pem"
	I0203 14:46:54.809421   13217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2568.pem
	I0203 14:46:54.813815   13217 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb  3 22:13 /usr/share/ca-certificates/2568.pem
	I0203 14:46:54.813877   13217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2568.pem
	I0203 14:46:54.820420   13217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2568.pem /etc/ssl/certs/51391683.0"
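Note: the openssl x509 -hash calls above compute the subject-hash names (3ec20f2e, b5213941, 51391683) used for the /etc/ssl/certs/&lt;hash&gt;.0 symlinks, which is how the node's trust store finds the minikube CA and the test certificates. A hand-run equivalent inside the node, purely illustrative:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941, matching the link created above
    ls -l /etc/ssl/certs/b5213941.0                                            # should point at minikubeCA.pem
    openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt  # should report OK once the hash link exists, assuming apiserver.crt is signed by the minikube CA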
	I0203 14:46:54.828672   13217 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-759000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 14:46:54.828777   13217 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 14:46:54.851111   13217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 14:46:54.859406   13217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 14:46:54.867959   13217 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0203 14:46:54.868012   13217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 14:46:54.875646   13217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 14:46:54.875668   13217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 14:46:54.922862   13217 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0203 14:46:54.922921   13217 kubeadm.go:322] [preflight] Running pre-flight checks
	I0203 14:46:55.221485   13217 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 14:46:55.221579   13217 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 14:46:55.221648   13217 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 14:46:55.450537   13217 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 14:46:55.451209   13217 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 14:46:55.457540   13217 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0203 14:46:55.521423   13217 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 14:46:55.544419   13217 out.go:204]   - Generating certificates and keys ...
	I0203 14:46:55.544513   13217 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0203 14:46:55.544581   13217 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0203 14:46:55.876720   13217 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 14:46:56.009530   13217 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0203 14:46:56.219594   13217 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0203 14:46:56.309857   13217 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0203 14:46:56.554478   13217 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0203 14:46:56.554648   13217 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-759000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0203 14:46:56.865660   13217 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0203 14:46:56.865823   13217 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-759000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0203 14:46:57.201302   13217 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 14:46:57.453127   13217 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 14:46:57.530033   13217 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0203 14:46:57.530116   13217 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 14:46:57.597170   13217 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 14:46:57.662383   13217 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 14:46:57.780192   13217 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 14:46:57.865109   13217 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 14:46:57.866112   13217 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 14:46:57.909586   13217 out.go:204]   - Booting up control plane ...
	I0203 14:46:57.909702   13217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 14:46:57.909771   13217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 14:46:57.909838   13217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 14:46:57.909915   13217 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 14:46:57.910033   13217 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 14:47:37.875891   13217 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0203 14:47:37.876311   13217 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:47:37.876485   13217 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:47:42.878603   13217 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:47:42.878925   13217 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:47:52.879464   13217 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:47:52.879627   13217 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:48:12.880666   13217 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:48:12.880844   13217 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:48:52.882733   13217 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:48:52.882899   13217 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:48:52.882906   13217 kubeadm.go:322] 
	I0203 14:48:52.882935   13217 kubeadm.go:322] Unfortunately, an error has occurred:
	I0203 14:48:52.882961   13217 kubeadm.go:322] 	timed out waiting for the condition
	I0203 14:48:52.882967   13217 kubeadm.go:322] 
	I0203 14:48:52.882988   13217 kubeadm.go:322] This error is likely caused by:
	I0203 14:48:52.883025   13217 kubeadm.go:322] 	- The kubelet is not running
	I0203 14:48:52.883127   13217 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 14:48:52.883142   13217 kubeadm.go:322] 
	I0203 14:48:52.883246   13217 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 14:48:52.883289   13217 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0203 14:48:52.883333   13217 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0203 14:48:52.883339   13217 kubeadm.go:322] 
	I0203 14:48:52.883417   13217 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 14:48:52.883487   13217 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0203 14:48:52.883540   13217 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0203 14:48:52.883574   13217 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0203 14:48:52.883643   13217 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0203 14:48:52.883666   13217 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0203 14:48:52.886533   13217 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0203 14:48:52.886625   13217 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0203 14:48:52.886729   13217 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0203 14:48:52.886844   13217 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 14:48:52.886939   13217 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 14:48:52.887028   13217 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0203 14:48:52.887187   13217 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-759000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-759000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-759000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-759000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0203 14:48:52.887238   13217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0203 14:48:53.339340   13217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 14:48:53.349469   13217 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0203 14:48:53.349527   13217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 14:48:53.358059   13217 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 14:48:53.358093   13217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 14:48:53.413524   13217 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0203 14:48:53.413575   13217 kubeadm.go:322] [preflight] Running pre-flight checks
	I0203 14:48:53.733546   13217 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 14:48:53.733637   13217 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 14:48:53.733752   13217 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 14:48:53.991642   13217 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 14:48:53.993688   13217 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 14:48:54.001376   13217 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0203 14:48:54.078790   13217 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 14:48:54.103263   13217 out.go:204]   - Generating certificates and keys ...
	I0203 14:48:54.103386   13217 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0203 14:48:54.103438   13217 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0203 14:48:54.103496   13217 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 14:48:54.103559   13217 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0203 14:48:54.103634   13217 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 14:48:54.103707   13217 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0203 14:48:54.103783   13217 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0203 14:48:54.103844   13217 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0203 14:48:54.103928   13217 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 14:48:54.104024   13217 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 14:48:54.104065   13217 kubeadm.go:322] [certs] Using the existing "sa" key
	I0203 14:48:54.104136   13217 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 14:48:54.287314   13217 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 14:48:54.653587   13217 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 14:48:54.719440   13217 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 14:48:54.868140   13217 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 14:48:54.868650   13217 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 14:48:54.889245   13217 out.go:204]   - Booting up control plane ...
	I0203 14:48:54.889355   13217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 14:48:54.889422   13217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 14:48:54.889477   13217 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 14:48:54.889545   13217 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 14:48:54.889670   13217 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 14:49:34.878484   13217 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0203 14:49:34.879122   13217 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:49:34.879294   13217 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:49:39.880043   13217 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:49:39.880197   13217 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:49:49.882264   13217 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:49:49.882482   13217 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:50:09.884838   13217 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:50:09.885016   13217 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:50:49.886660   13217 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 14:50:49.886809   13217 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 14:50:49.886819   13217 kubeadm.go:322] 
	I0203 14:50:49.886850   13217 kubeadm.go:322] Unfortunately, an error has occurred:
	I0203 14:50:49.886892   13217 kubeadm.go:322] 	timed out waiting for the condition
	I0203 14:50:49.886904   13217 kubeadm.go:322] 
	I0203 14:50:49.886945   13217 kubeadm.go:322] This error is likely caused by:
	I0203 14:50:49.886988   13217 kubeadm.go:322] 	- The kubelet is not running
	I0203 14:50:49.887083   13217 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 14:50:49.887090   13217 kubeadm.go:322] 
	I0203 14:50:49.887188   13217 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 14:50:49.887216   13217 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0203 14:50:49.887237   13217 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0203 14:50:49.887248   13217 kubeadm.go:322] 
	I0203 14:50:49.887321   13217 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 14:50:49.887403   13217 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0203 14:50:49.887475   13217 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0203 14:50:49.887521   13217 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0203 14:50:49.887583   13217 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0203 14:50:49.887610   13217 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0203 14:50:49.890569   13217 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0203 14:50:49.890640   13217 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0203 14:50:49.890738   13217 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0203 14:50:49.890827   13217 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 14:50:49.890912   13217 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 14:50:49.891000   13217 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0203 14:50:49.891026   13217 kubeadm.go:403] StartCluster complete in 3m55.057478982s
	I0203 14:50:49.891113   13217 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 14:50:49.916831   13217 logs.go:279] 0 containers: []
	W0203 14:50:49.916845   13217 logs.go:281] No container was found matching "kube-apiserver"
	I0203 14:50:49.916940   13217 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 14:50:49.941937   13217 logs.go:279] 0 containers: []
	W0203 14:50:49.941950   13217 logs.go:281] No container was found matching "etcd"
	I0203 14:50:49.942016   13217 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 14:50:49.965461   13217 logs.go:279] 0 containers: []
	W0203 14:50:49.965475   13217 logs.go:281] No container was found matching "coredns"
	I0203 14:50:49.965547   13217 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 14:50:49.990683   13217 logs.go:279] 0 containers: []
	W0203 14:50:49.990696   13217 logs.go:281] No container was found matching "kube-scheduler"
	I0203 14:50:49.990769   13217 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 14:50:50.013767   13217 logs.go:279] 0 containers: []
	W0203 14:50:50.013782   13217 logs.go:281] No container was found matching "kube-proxy"
	I0203 14:50:50.013854   13217 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 14:50:50.037528   13217 logs.go:279] 0 containers: []
	W0203 14:50:50.037541   13217 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 14:50:50.037608   13217 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 14:50:50.059957   13217 logs.go:279] 0 containers: []
	W0203 14:50:50.059973   13217 logs.go:281] No container was found matching "storage-provisioner"
	I0203 14:50:50.060042   13217 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 14:50:50.086494   13217 logs.go:279] 0 containers: []
	W0203 14:50:50.086510   13217 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 14:50:50.086519   13217 logs.go:124] Gathering logs for describe nodes ...
	I0203 14:50:50.086529   13217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 14:50:50.209746   13217 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 14:50:50.209756   13217 logs.go:124] Gathering logs for Docker ...
	I0203 14:50:50.209762   13217 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 14:50:50.227509   13217 logs.go:124] Gathering logs for container status ...
	I0203 14:50:50.227523   13217 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 14:50:52.281165   13217 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053587686s)
	I0203 14:50:52.281283   13217 logs.go:124] Gathering logs for kubelet ...
	I0203 14:50:52.281290   13217 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 14:50:52.330499   13217 logs.go:124] Gathering logs for dmesg ...
	I0203 14:50:52.330518   13217 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
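Note: at this point minikube has gathered describe-nodes (which fails, since the apiserver never came up), the Docker and kubelet journals, dmesg and container status. The same material can be pulled off the node afterwards for offline debugging, e.g. (sketch; -p matches the profile in this run, and the minikube binary would be the one built for the test):

    minikube -p kubernetes-upgrade-759000 logs --file=/tmp/kubernetes-upgrade-759000.log
    docker exec kubernetes-upgrade-759000 journalctl -u kubelet --no-pager | tail -n 100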
	W0203 14:50:52.346771   13217 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0203 14:50:52.346793   13217 out.go:239] * 
	W0203 14:50:52.346976   13217 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 14:50:52.347026   13217 out.go:239] * 
	W0203 14:50:52.347788   13217 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 14:50:52.431115   13217 out.go:177] 
	W0203 14:50:52.473274   13217 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 14:50:52.473367   13217 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0203 14:50:52.473410   13217 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0203 14:50:52.515124   13217 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:232: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
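The kubeadm output captured above shows the kubelet health endpoint (localhost:10248/healthz) never answering, so kubeadm times out in the wait-control-plane phase and minikube exits with K8S_KUBELET_NOT_RUNNING. A minimal follow-up sketch based on the suggestion in that captured output (the --extra-config flag is the one proposed there, the profile name is taken from this run; these commands were not re-run as part of this report):

    # look at the kubelet on the failed node, per the kubeadm hint above
    out/minikube-darwin-amd64 -p kubernetes-upgrade-759000 ssh "sudo journalctl -xeu kubelet | tail -n 50"
    # retry the start with the systemd cgroup driver, as suggested in the captured output
    out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --extra-config=kubelet.cgroup-driver=systemd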
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-759000
E0203 14:50:53.035813    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-759000: (1.59941657s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-759000 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-759000 status --format={{.Host}}: exit status 7 (120.617767ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (4m40.651427388s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-759000 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (581.036314ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-759000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-759000
	    minikube start -p kubernetes-upgrade-759000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7590002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-759000 --kubernetes-version=v1.26.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 
E0203 14:55:53.042052    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (47.739138668s)
version_upgrade_test.go:287: *** TestKubernetesUpgrade FAILED at 2023-02-03 14:56:23.363268 -0800 PST m=+2922.081488135
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-759000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-759000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b48fcbc654bf354c40669c8f1c56dae1f1db1bd501741849771ea767d90d81bf",
	        "Created": "2023-02-03T22:46:49.296006418Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 202061,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T22:50:55.700425678Z",
	            "FinishedAt": "2023-02-03T22:50:53.062245254Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/b48fcbc654bf354c40669c8f1c56dae1f1db1bd501741849771ea767d90d81bf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b48fcbc654bf354c40669c8f1c56dae1f1db1bd501741849771ea767d90d81bf/hostname",
	        "HostsPath": "/var/lib/docker/containers/b48fcbc654bf354c40669c8f1c56dae1f1db1bd501741849771ea767d90d81bf/hosts",
	        "LogPath": "/var/lib/docker/containers/b48fcbc654bf354c40669c8f1c56dae1f1db1bd501741849771ea767d90d81bf/b48fcbc654bf354c40669c8f1c56dae1f1db1bd501741849771ea767d90d81bf-json.log",
	        "Name": "/kubernetes-upgrade-759000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-759000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-759000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/58a82dcdb6cc82034a77c0d01d4566bdb3f61f9d2a45520a5b61011a5baf07a2-init/diff:/var/lib/docker/overlay2/48b9eff26e94f4439154aad348135bd66f3f3733ee1f2bd22fc60e3a240f764f/diff:/var/lib/docker/overlay2/89930e70b646c5893dab0f6f4274a9fb3b60a11d62da2f59d4b55fbf1c480a90/diff:/var/lib/docker/overlay2/3ae0575a256264d050211e3ca122b2804683b9f4323f7a2c2a2d45f4df3254dd/diff:/var/lib/docker/overlay2/6468a293a6ba199c732872fb7807de809fa2ff9ecdccaeb7146f28e1a4dc9607/diff:/var/lib/docker/overlay2/3fab248b5834a764e1996b2fea0af0100ffc2c150728124745a8e42d43a2193d/diff:/var/lib/docker/overlay2/1ec21b4015d44918fda148d959030dadcaa3527172fde96571978bdabab6921e/diff:/var/lib/docker/overlay2/5465a266a0268ad0ffa1c12afbc320e2232b025ee4eaa5c74b2f5b236ce5285d/diff:/var/lib/docker/overlay2/61b7474b98e6431b966662b98c31f46eb982bdd7098bfccdad928e6c3c0a9024/diff:/var/lib/docker/overlay2/d0925bff8df24b32d176f1438969c0c3adac5ec1bc1da61c2a8bf17e4fd9313b/diff:/var/lib/docker/overlay2/b6c213617f12dea208efc9c642db1147a22658b32383a0256106a994fcafebca/diff:/var/lib/docker/overlay2/5127e35d4cf68de9ece51806ff390f9b88bac61eaa8bfdf4cf5d6ab1e5b2ca27/diff:/var/lib/docker/overlay2/3d041d254d21e7ec2e2abdce56a3e6eadb3f668238bf3667e7c25effdcc05940/diff:/var/lib/docker/overlay2/15bab989d641601a640d89b58f645e79668cb801bf10066ecd9790e4c8bbd4f1/diff:/var/lib/docker/overlay2/d6e45696a59c84a5b4ad5ad0bec8b561335a71b3c4eaaa35bcbcc00bd3fbcc1a/diff:/var/lib/docker/overlay2/d0a13d3859926a84eb9c7b571fa8c670d15ebf0ab75e6e8971a7b8679b316ca1/diff:/var/lib/docker/overlay2/a5096e1509a8455c4d67f60b17102a08c795ad1bdbeeac3dd75c3b05ec6d922c/diff:/var/lib/docker/overlay2/aeeda7f653d5dcfbb5ef8a7b53a6aba12a5892c04d984f10a71be11833addb2d/diff:/var/lib/docker/overlay2/84bf768303dfde933d5690feb659b1acd5419ca63d78c4760218d578794c3bbe/diff:/var/lib/docker/overlay2/dec6762f77828143e0cb548cc3a6bb9cc10b9f4376070bc49558da8dfd0b7d2e/diff:/var/lib/docker/overlay2/cc9805f6c705d4d0c6c7675e7745ab0dcdd90879809a2089256c0606e80cee7a/diff:/var/lib/docker/overlay2/e34b4063934c19fe1e614a10ef1e9582f55283fa37c9d0b89d0df8ca32a8a03a/diff:/var/lib/docker/overlay2/c6b6cf801ae9739234022d5e5c55176ee1249b3441400f8b9dbde2c15c6d66e3/diff:/var/lib/docker/overlay2/73dfe58a9f4125f321d10ef97d5c2d4951480455bb243f166600ead63c22f5c2/diff:/var/lib/docker/overlay2/476ba412f9e61cc020124b5051db9c99ea08176881e535e0b5fe6ddb51b94a72/diff:/var/lib/docker/overlay2/2729a4e84f2d55dc49c9417254fc26c0baa21f93cd9b58386f869cf5add162c1/diff:/var/lib/docker/overlay2/8523001ce06172b58b31ebf311f62bf435ed3a3d48fec58d3f1239f29386a28b/diff:/var/lib/docker/overlay2/2b7edb3177897200229f3ba188cfd00e16df91cf85b91a5f08ddbfa15d898a3d/diff:/var/lib/docker/overlay2/94231ff2ac5bf304d3c25d204f1a7b2195ef2230bfbb7bb5a1a1d6f2f4faad6a/diff:/var/lib/docker/overlay2/698d3cd800bae40e0aeb942360c67b793550c24bab66ba43080cbcaa500a9069/diff:/var/lib/docker/overlay2/6aadd46423b70866f00e0f4f83310711c1bc22b4dc8989e6b58cd6254540c428/diff:/var/lib/docker/overlay2/035afbe91bfd3bebd444b29f3ceed1e954aab275fca0c8aaf2364df71f46e0c3/diff:/var/lib/docker/overlay2/bc68049ba1568fe8bb188720c62bcc993e62a364901ba41a533aa2991cceaf82/diff:/var/lib/docker/overlay2/c3373595ff40ba0ece2698f99fc2e1c9a83c0ef6a1df119125e3009256dee2ed/diff:/var/lib/docker/overlay2/59c87dca7d8987a7e1b5cd959772e06b96d6ecb36399ff9e35a1ecfe4ed33345/diff:/var/lib/docker/overlay2/22434c33a4994657a469b040789f269ac912f4046d76f2531dff05de4700fb3b/diff:/var/lib/docker/overlay2/699ea76dd0a43fedc031501535714f087d7ec3f37593390c9e81c029373c7f8f/diff:/var/lib/docker/overlay2/e9414c264977801651ed9f3ee268cd0f245614747e184e8f3170e1e95d1fc081/diff:/var/lib/docker/overlay2/2781a0c689754699793aa9bdfeeabdaa1c6905e265302dd267c6c12daa01eb9c/diff:/var/lib/docker/overlay2/4b59a1fc73d3e865eaf7e2e62fd6d2808234c79d79b6b30f6b1a482a291580d3/diff:/var/lib/docker/overlay2/7f51e83dcff3227064daa2b7cc6a7c87f8f5e415fa8723316c24512d6029941d/diff:/var/lib/docker/overlay2/50662c60babc4d383f2af76fc66f3712bcc9e85a50f0525fa680c8336af46ce3/diff:/var/lib/docker/overlay2/2112d8437fae31ae95f85bdf08e3f29d09d7b8adf34c9608a2e3bfecc049e0c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58a82dcdb6cc82034a77c0d01d4566bdb3f61f9d2a45520a5b61011a5baf07a2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58a82dcdb6cc82034a77c0d01d4566bdb3f61f9d2a45520a5b61011a5baf07a2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58a82dcdb6cc82034a77c0d01d4566bdb3f61f9d2a45520a5b61011a5baf07a2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-759000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-759000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-759000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-759000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-759000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "628b5c2e3d2e4e9ea43e3c9efc1b6de313d52ac55939e252773477f65e499598",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53274"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53275"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53271"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53272"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53273"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/628b5c2e3d2e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-759000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b48fcbc654bf",
	                        "kubernetes-upgrade-759000"
	                    ],
	                    "NetworkID": "4ee0c87bc106f8acdfb3cc769e478adbbdfa3779f6df44017ef7e9a703e1e9f6",
	                    "EndpointID": "472aa77c2d9e1494abc54579d0fc70cde369cecddbb60584c89f6cd4975a8cb4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
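Most of the inspect dump above is static configuration; for triage it is usually only the State and network blocks that matter. A hypothetical one-liner (not part of the test harness) that pulls just the state fields with docker's Go-template formatter:

    docker inspect -f 'status={{.State.Status}} started={{.State.StartedAt}} finished={{.State.FinishedAt}} exit={{.State.ExitCode}}' kubernetes-upgrade-759000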
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-759000 -n kubernetes-upgrade-759000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-759000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-759000 logs -n 25: (2.767213137s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo cat                           | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo cat                           | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo cat                           | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo docker                        | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo cat                           | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo cat                           | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo cat                           | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo cat                           | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo                               | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo find                          | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-292000 sudo crio                          | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p flannel-292000                                    | flannel-292000            | jenkins | v1.29.0 | 03 Feb 23 14:55 PST | 03 Feb 23 14:55 PST |
	| start   | -p enable-default-cni-292000                         | enable-default-cni-292000 | jenkins | v1.29.0 | 03 Feb 23 14:55 PST |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
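The table above is minikube's audit of the diagnostic commands it ran over SSH against the flannel-292000 node before deleting it: unit files and status for docker, cri-docker, containerd and crio, plus their on-disk configs. Any of these can be replayed by hand; a minimal sketch, assuming a profile with that name were still running (the table shows it being deleted at the end):

    # Read back the container runtime units and configs on the node, mirroring the audit entries above
    out/minikube-darwin-amd64 ssh -p flannel-292000 "sudo systemctl cat docker --no-pager"
    out/minikube-darwin-amd64 ssh -p flannel-292000 "sudo cat /etc/docker/daemon.json"
    out/minikube-darwin-amd64 ssh -p flannel-292000 "sudo systemctl status containerd --all --full --no-pager"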
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/03 14:55:59
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 14:55:59.453182   16714 out.go:296] Setting OutFile to fd 1 ...
	I0203 14:55:59.453329   16714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:55:59.453334   16714 out.go:309] Setting ErrFile to fd 2...
	I0203 14:55:59.453338   16714 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:55:59.453461   16714 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 14:55:59.453978   16714 out.go:303] Setting JSON to false
	I0203 14:55:59.472184   16714 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":3334,"bootTime":1675461625,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0203 14:55:59.472277   16714 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0203 14:55:59.494519   16714 out.go:177] * [enable-default-cni-292000] minikube v1.29.0 on Darwin 13.2
	I0203 14:55:59.516469   16714 notify.go:220] Checking for updates...
	I0203 14:55:59.538342   16714 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 14:55:59.560122   16714 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 14:55:59.581522   16714 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0203 14:55:59.603476   16714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 14:55:59.624996   16714 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	I0203 14:55:59.646425   16714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 14:55:59.669087   16714 config.go:180] Loaded profile config "kubernetes-upgrade-759000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 14:55:59.669191   16714 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 14:55:59.731421   16714 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0203 14:55:59.731561   16714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 14:55:59.876233   16714 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 22:55:59.781796528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 14:55:59.898170   16714 out.go:177] * Using the docker driver based on user configuration
	I0203 14:55:59.919990   16714 start.go:296] selected driver: docker
	I0203 14:55:59.920020   16714 start.go:857] validating driver "docker" against <nil>
	I0203 14:55:59.920037   16714 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 14:55:59.924148   16714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 14:56:00.066773   16714 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 22:55:59.974272238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 14:56:00.066911   16714 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	E0203 14:56:00.067078   16714 start_flags.go:457] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0203 14:56:00.067099   16714 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 14:56:00.089059   16714 out.go:177] * Using Docker Desktop driver with root privileges
	I0203 14:56:00.110401   16714 cni.go:84] Creating CNI manager for "bridge"
	I0203 14:56:00.110431   16714 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0203 14:56:00.110447   16714 start_flags.go:319] config:
	{Name:enable-default-cni-292000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:enable-default-cni-292000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 14:56:00.154680   16714 out.go:177] * Starting control plane node enable-default-cni-292000 in cluster enable-default-cni-292000
	I0203 14:56:00.175680   16714 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 14:56:00.197508   16714 out.go:177] * Pulling base image ...
	I0203 14:56:00.211894   16714 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 14:56:00.211944   16714 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 14:56:00.211982   16714 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0203 14:56:00.212007   16714 cache.go:57] Caching tarball of preloaded images
	I0203 14:56:00.212169   16714 preload.go:174] Found /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 14:56:00.212188   16714 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0203 14:56:00.212950   16714 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/config.json ...
	I0203 14:56:00.213042   16714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/config.json: {Name:mkfe5b243fea5b9a046b1d57792895aef26e3bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:56:00.269727   16714 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon, skipping pull
	I0203 14:56:00.269753   16714 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in daemon, skipping load
	I0203 14:56:00.269773   16714 cache.go:193] Successfully downloaded all kic artifacts
	I0203 14:56:00.269816   16714 start.go:364] acquiring machines lock for enable-default-cni-292000: {Name:mk5e8cafefe84734e7c823f54be4cd717c8f6013 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 14:56:00.269965   16714 start.go:368] acquired machines lock for "enable-default-cni-292000" in 137.149µs
	I0203 14:56:00.269993   16714 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-292000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:enable-default-cni-292000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 14:56:00.270065   16714 start.go:125] createHost starting for "" (driver="docker")
	I0203 14:55:55.763759   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:00.311484   16714 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0203 14:56:00.311712   16714 start.go:159] libmachine.API.Create for "enable-default-cni-292000" (driver="docker")
	I0203 14:56:00.311741   16714 client.go:168] LocalClient.Create starting
	I0203 14:56:00.311858   16714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem
	I0203 14:56:00.311909   16714 main.go:141] libmachine: Decoding PEM data...
	I0203 14:56:00.311928   16714 main.go:141] libmachine: Parsing certificate...
	I0203 14:56:00.311964   16714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem
	I0203 14:56:00.311995   16714 main.go:141] libmachine: Decoding PEM data...
	I0203 14:56:00.312003   16714 main.go:141] libmachine: Parsing certificate...
	I0203 14:56:00.312386   16714 cli_runner.go:164] Run: docker network inspect enable-default-cni-292000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0203 14:56:00.368654   16714 cli_runner.go:211] docker network inspect enable-default-cni-292000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0203 14:56:00.368762   16714 network_create.go:281] running [docker network inspect enable-default-cni-292000] to gather additional debugging logs...
	I0203 14:56:00.368797   16714 cli_runner.go:164] Run: docker network inspect enable-default-cni-292000
	W0203 14:56:00.427925   16714 cli_runner.go:211] docker network inspect enable-default-cni-292000 returned with exit code 1
	I0203 14:56:00.427957   16714 network_create.go:284] error running [docker network inspect enable-default-cni-292000]: docker network inspect enable-default-cni-292000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-292000
	I0203 14:56:00.427974   16714 network_create.go:286] output of [docker network inspect enable-default-cni-292000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-292000
	
	** /stderr **
	I0203 14:56:00.428066   16714 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0203 14:56:00.490531   16714 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0203 14:56:00.490928   16714 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001220880}
	I0203 14:56:00.490940   16714 network_create.go:123] attempt to create docker network enable-default-cni-292000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0203 14:56:00.491013   16714 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-292000 enable-default-cni-292000
	W0203 14:56:00.551634   16714 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-292000 enable-default-cni-292000 returned with exit code 1
	W0203 14:56:00.551681   16714 network_create.go:148] failed to create docker network enable-default-cni-292000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-292000 enable-default-cni-292000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0203 14:56:00.551705   16714 network_create.go:115] failed to create docker network enable-default-cni-292000 192.168.58.0/24, will retry: subnet is taken
	I0203 14:56:00.553182   16714 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0203 14:56:00.553574   16714 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001285150}
	I0203 14:56:00.553588   16714 network_create.go:123] attempt to create docker network enable-default-cni-292000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0203 14:56:00.553660   16714 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-292000 enable-default-cni-292000
	I0203 14:56:00.651037   16714 network_create.go:107] docker network enable-default-cni-292000 192.168.67.0/24 created
	I0203 14:56:00.651074   16714 kic.go:117] calculated static IP "192.168.67.2" for the "enable-default-cni-292000" container
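The first network create above fails with "Pool overlaps with other one on this address space": Docker already has a network on 192.168.58.0/24, so minikube skips that subnet and retries on 192.168.67.0/24, which succeeds. A rough way to see which subnets are already claimed on the host, assuming the same Docker Desktop daemon as in this run:

    # Print each Docker network together with the subnet(s) it reserves
    docker network ls --format '{{.Name}}' | while read -r net; do
      docker network inspect "$net" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
    done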
	I0203 14:56:00.651177   16714 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0203 14:56:00.713173   16714 cli_runner.go:164] Run: docker volume create enable-default-cni-292000 --label name.minikube.sigs.k8s.io=enable-default-cni-292000 --label created_by.minikube.sigs.k8s.io=true
	I0203 14:56:00.774057   16714 oci.go:103] Successfully created a docker volume enable-default-cni-292000
	I0203 14:56:00.774240   16714 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-292000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-292000 --entrypoint /usr/bin/test -v enable-default-cni-292000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -d /var/lib
	I0203 14:56:01.225622   16714 oci.go:107] Successfully prepared a docker volume enable-default-cni-292000
	I0203 14:56:01.225648   16714 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 14:56:01.225662   16714 kic.go:190] Starting extracting preloaded images to volume ...
	I0203 14:56:01.225770   16714 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-292000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0203 14:56:00.764697   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0203 14:56:00.764727   16232 api_server.go:165] Checking apiserver status ...
	I0203 14:56:00.764777   16232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 14:56:00.777393   16232 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12845/cgroup
	W0203 14:56:00.787284   16232 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12845/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0203 14:56:00.787353   16232 ssh_runner.go:195] Run: ls
	I0203 14:56:00.792939   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:05.497985   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:05.498012   16232 retry.go:31] will retry after 242.214273ms: state is "Stopped"
	I0203 14:56:07.675452   16714 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v enable-default-cni-292000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.44947255s)
	I0203 14:56:07.675473   16714 kic.go:199] duration metric: took 6.449676 seconds to extract preloaded images to volume
	I0203 14:56:07.675572   16714 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0203 14:56:07.818334   16714 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname enable-default-cni-292000 --name enable-default-cni-292000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-292000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=enable-default-cni-292000 --network enable-default-cni-292000 --ip 192.168.67.2 --volume enable-default-cni-292000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8
	I0203 14:56:08.176448   16714 cli_runner.go:164] Run: docker container inspect enable-default-cni-292000 --format={{.State.Running}}
	I0203 14:56:08.239342   16714 cli_runner.go:164] Run: docker container inspect enable-default-cni-292000 --format={{.State.Status}}
	I0203 14:56:08.307356   16714 cli_runner.go:164] Run: docker exec enable-default-cni-292000 stat /var/lib/dpkg/alternatives/iptables
	I0203 14:56:08.423855   16714 oci.go:144] the created container "enable-default-cni-292000" has a running status.
	I0203 14:56:08.423892   16714 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/enable-default-cni-292000/id_rsa...
	I0203 14:56:08.627323   16714 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/enable-default-cni-292000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0203 14:56:08.732636   16714 cli_runner.go:164] Run: docker container inspect enable-default-cni-292000 --format={{.State.Status}}
	I0203 14:56:08.789671   16714 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0203 14:56:08.789691   16714 kic_runner.go:114] Args: [docker exec --privileged enable-default-cni-292000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0203 14:56:08.904001   16714 cli_runner.go:164] Run: docker container inspect enable-default-cni-292000 --format={{.State.Status}}
	I0203 14:56:08.963516   16714 machine.go:88] provisioning docker machine ...
	I0203 14:56:08.963564   16714 ubuntu.go:169] provisioning hostname "enable-default-cni-292000"
	I0203 14:56:08.963676   16714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-292000
	I0203 14:56:09.021588   16714 main.go:141] libmachine: Using SSH client type: native
	I0203 14:56:09.021798   16714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 54036 <nil> <nil>}
	I0203 14:56:09.021812   16714 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-292000 && echo "enable-default-cni-292000" | sudo tee /etc/hostname
	I0203 14:56:09.160682   16714 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-292000
	
	I0203 14:56:09.160785   16714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-292000
	I0203 14:56:09.219776   16714 main.go:141] libmachine: Using SSH client type: native
	I0203 14:56:09.219941   16714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 54036 <nil> <nil>}
	I0203 14:56:09.219953   16714 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-292000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-292000/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-292000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 14:56:09.351018   16714 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 14:56:09.351037   16714 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15770-1719/.minikube CaCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15770-1719/.minikube}
	I0203 14:56:09.351063   16714 ubuntu.go:177] setting up certificates
	I0203 14:56:09.351069   16714 provision.go:83] configureAuth start
	I0203 14:56:09.351143   16714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-292000
	I0203 14:56:09.408729   16714 provision.go:138] copyHostCerts
	I0203 14:56:09.408824   16714 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem, removing ...
	I0203 14:56:09.408838   16714 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem
	I0203 14:56:09.408937   16714 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem (1078 bytes)
	I0203 14:56:09.409116   16714 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem, removing ...
	I0203 14:56:09.409122   16714 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem
	I0203 14:56:09.409184   16714 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem (1123 bytes)
	I0203 14:56:09.409320   16714 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem, removing ...
	I0203 14:56:09.409335   16714 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem
	I0203 14:56:09.409400   16714 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem (1675 bytes)
	I0203 14:56:09.409509   16714 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-292000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube enable-default-cni-292000]
	I0203 14:56:05.742270   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:05.743949   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:05.743985   16232 retry.go:31] will retry after 300.724609ms: state is "Stopped"
	I0203 14:56:06.044802   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:06.048540   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:06.048570   16232 retry.go:31] will retry after 427.113882ms: state is "Stopped"
	I0203 14:56:06.476057   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:06.477795   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:06.477819   16232 retry.go:31] will retry after 382.2356ms: state is "Stopped"
	I0203 14:56:06.860265   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:06.862350   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:06.862370   16232 retry.go:31] will retry after 505.529557ms: state is "Stopped"
	I0203 14:56:07.368121   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:07.369759   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:07.369782   16232 retry.go:31] will retry after 609.195524ms: state is "Stopped"
	I0203 14:56:07.979034   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:07.980211   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:07.980233   16232 retry.go:31] will retry after 858.741692ms: state is "Stopped"
	I0203 14:56:08.839533   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:08.841381   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:08.841426   16232 retry.go:31] will retry after 1.201160326s: state is "Stopped"
	I0203 14:56:10.042686   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:10.043877   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:10.043895   16232 retry.go:31] will retry after 1.723796097s: state is "Stopped"
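Interleaved with the enable-default-cni start, process 16232 keeps probing https://127.0.0.1:53273/healthz and getting EOF or a timeout: the forwarded port accepts the connection, but the apiserver behind it never answers, so the retry loop backs off and tries again. A manual probe of the same endpoint, assuming that port mapping (from the other profile still starting on this host) is active:

    # -k skips certificate verification; the apiserver presents a self-signed cert here
    curl -k https://127.0.0.1:53273/healthz
    # Check whether anything is listening on the forwarded port at all
    nc -vz 127.0.0.1 53273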
	I0203 14:56:09.502084   16714 provision.go:172] copyRemoteCerts
	I0203 14:56:09.502153   16714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 14:56:09.502206   16714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-292000
	I0203 14:56:09.562303   16714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54036 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/enable-default-cni-292000/id_rsa Username:docker}
	I0203 14:56:09.655302   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 14:56:09.672796   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0203 14:56:09.690132   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 14:56:09.707790   16714 provision.go:86] duration metric: configureAuth took 356.700437ms
	I0203 14:56:09.707804   16714 ubuntu.go:193] setting minikube options for container-runtime
	I0203 14:56:09.707956   16714 config.go:180] Loaded profile config "enable-default-cni-292000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 14:56:09.708014   16714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-292000
	I0203 14:56:09.766783   16714 main.go:141] libmachine: Using SSH client type: native
	I0203 14:56:09.766950   16714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 54036 <nil> <nil>}
	I0203 14:56:09.766967   16714 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 14:56:09.899088   16714 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0203 14:56:09.899102   16714 ubuntu.go:71] root file system type: overlay
	I0203 14:56:09.899281   16714 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 14:56:09.899373   16714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-292000
	I0203 14:56:09.957338   16714 main.go:141] libmachine: Using SSH client type: native
	I0203 14:56:09.957512   16714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 54036 <nil> <nil>}
	I0203 14:56:09.957584   16714 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 14:56:10.096415   16714 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 14:56:10.096505   16714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-292000
	I0203 14:56:10.154237   16714 main.go:141] libmachine: Using SSH client type: native
	I0203 14:56:10.154405   16714 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 54036 <nil> <nil>}
	I0203 14:56:10.154418   16714 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 14:56:10.781744   16714 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:56:10.092715842 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0203 14:56:10.781770   16714 machine.go:91] provisioned docker machine in 1.81819371s
	I0203 14:56:10.781778   16714 client.go:171] LocalClient.Create took 10.469814161s
	I0203 14:56:10.781806   16714 start.go:167] duration metric: libmachine.API.Create for "enable-default-cni-292000" took 10.469874365s
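The diff above is minikube rewriting the stock /lib/systemd/system/docker.service inside the node: it clears the inherited ExecStart, starts dockerd with TLS against the certs copied earlier and with 10.96.0.0/12 as an insecure registry, then daemon-reloads, enables and restarts docker. The result can be read back the same way the audit table at the top of this log does it; a sketch, assuming the enable-default-cni-292000 node is still up:

    # Show the unit file and runtime status that the provisioning step just installed
    out/minikube-darwin-amd64 ssh -p enable-default-cni-292000 "sudo systemctl cat docker --no-pager"
    out/minikube-darwin-amd64 ssh -p enable-default-cni-292000 "sudo systemctl status docker --all --full --no-pager"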
	I0203 14:56:10.781816   16714 start.go:300] post-start starting for "enable-default-cni-292000" (driver="docker")
	I0203 14:56:10.781822   16714 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 14:56:10.781904   16714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 14:56:10.781991   16714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-292000
	I0203 14:56:10.842786   16714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54036 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/enable-default-cni-292000/id_rsa Username:docker}
	I0203 14:56:10.934961   16714 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 14:56:10.938603   16714 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0203 14:56:10.938619   16714 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0203 14:56:10.938627   16714 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0203 14:56:10.938632   16714 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0203 14:56:10.938649   16714 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/addons for local assets ...
	I0203 14:56:10.938759   16714 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/files for local assets ...
	I0203 14:56:10.938933   16714 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> 25682.pem in /etc/ssl/certs
	I0203 14:56:10.939128   16714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 14:56:10.946614   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /etc/ssl/certs/25682.pem (1708 bytes)
	I0203 14:56:10.964515   16714 start.go:303] post-start completed in 182.680751ms
	I0203 14:56:10.965064   16714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-292000
	I0203 14:56:11.021741   16714 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/config.json ...
	I0203 14:56:11.022174   16714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 14:56:11.022230   16714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-292000
	I0203 14:56:11.080430   16714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54036 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/enable-default-cni-292000/id_rsa Username:docker}
	I0203 14:56:11.170396   16714 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0203 14:56:11.175083   16714 start.go:128] duration metric: createHost completed in 10.904784443s
	I0203 14:56:11.175101   16714 start.go:83] releasing machines lock for "enable-default-cni-292000", held for 10.904901521s
	I0203 14:56:11.175183   16714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" enable-default-cni-292000
	I0203 14:56:11.234059   16714 ssh_runner.go:195] Run: cat /version.json
	I0203 14:56:11.234066   16714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 14:56:11.234147   16714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-292000
	I0203 14:56:11.234156   16714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-292000
	I0203 14:56:11.296769   16714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54036 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/enable-default-cni-292000/id_rsa Username:docker}
	I0203 14:56:11.296925   16714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54036 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/enable-default-cni-292000/id_rsa Username:docker}
	I0203 14:56:11.442952   16714 ssh_runner.go:195] Run: systemctl --version
	I0203 14:56:11.448131   16714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 14:56:11.453372   16714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0203 14:56:11.473273   16714 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0203 14:56:11.473383   16714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 14:56:11.480929   16714 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0203 14:56:11.493659   16714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 14:56:11.508050   16714 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0203 14:56:11.508068   16714 start.go:483] detecting cgroup driver to use...
	I0203 14:56:11.508079   16714 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 14:56:11.508185   16714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 14:56:11.521505   16714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0203 14:56:11.529925   16714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 14:56:11.538526   16714 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 14:56:11.538588   16714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 14:56:11.547313   16714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 14:56:11.555673   16714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 14:56:11.564608   16714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 14:56:11.573061   16714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 14:56:11.581138   16714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 14:56:11.589796   16714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 14:56:11.597027   16714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 14:56:11.604161   16714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 14:56:11.678251   16714 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 14:56:11.746683   16714 start.go:483] detecting cgroup driver to use...
	I0203 14:56:11.746701   16714 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 14:56:11.746771   16714 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 14:56:11.761453   16714 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0203 14:56:11.761521   16714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 14:56:11.772230   16714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 14:56:11.787368   16714 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 14:56:11.883508   16714 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 14:56:11.981925   16714 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 14:56:11.981941   16714 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0203 14:56:11.995832   16714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 14:56:12.082939   16714 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 14:56:12.290559   16714 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 14:56:12.356593   16714 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 14:56:12.428701   16714 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 14:56:12.496673   16714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 14:56:12.561775   16714 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 14:56:12.572982   16714 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 14:56:12.573070   16714 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 14:56:12.577098   16714 start.go:551] Will wait 60s for crictl version
	I0203 14:56:12.577143   16714 ssh_runner.go:195] Run: which crictl
	I0203 14:56:12.580727   16714 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 14:56:12.682800   16714 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0203 14:56:12.682877   16714 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 14:56:12.711615   16714 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 14:56:12.781015   16714 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0203 14:56:12.781280   16714 cli_runner.go:164] Run: docker exec -t enable-default-cni-292000 dig +short host.docker.internal
	I0203 14:56:12.892976   16714 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0203 14:56:12.893094   16714 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0203 14:56:12.897531   16714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 14:56:12.907736   16714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" enable-default-cni-292000
	I0203 14:56:12.965503   16714 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 14:56:12.965587   16714 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 14:56:12.989390   16714 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0203 14:56:12.989412   16714 docker.go:560] Images already preloaded, skipping extraction
	I0203 14:56:12.989500   16714 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 14:56:13.013164   16714 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
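The two identical listings above are the preload check: minikube only extracts the preloaded image tarball when the expected images are missing from the node's inner Docker daemon. A sketch of reproducing the same check by hand, again assuming this run's profile name:

    docker exec enable-default-cni-292000 docker images --format '{{.Repository}}:{{.Tag}}'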
	I0203 14:56:13.013187   16714 cache_images.go:84] Images are preloaded, skipping loading
	I0203 14:56:13.013283   16714 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 14:56:13.081391   16714 cni.go:84] Creating CNI manager for "bridge"
	I0203 14:56:13.081416   16714 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0203 14:56:13.081432   16714 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-292000 NodeName:enable-default-cni-292000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0203 14:56:13.081593   16714 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "enable-default-cni-292000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
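A generated config like the one above can be exercised on the node without mutating the cluster; a hedged sketch using the binary location and file path that appear in this log (run inside the node, and note that dry-run still needs root):

    sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run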
	
	I0203 14:56:13.081678   16714 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=enable-default-cni-292000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:enable-default-cni-292000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
	I0203 14:56:13.081743   16714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0203 14:56:13.089808   16714 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 14:56:13.089862   16714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 14:56:13.097220   16714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (457 bytes)
	I0203 14:56:13.110738   16714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 14:56:13.123593   16714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0203 14:56:13.136438   16714 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0203 14:56:13.140279   16714 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 14:56:13.150179   16714 certs.go:56] Setting up /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000 for IP: 192.168.67.2
	I0203 14:56:13.150198   16714 certs.go:186] acquiring lock for shared ca certs: {Name:mkdec04c6cc16ac0dcab0ae849b602e6c1942576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:56:13.150379   16714 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key
	I0203 14:56:13.150441   16714 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key
	I0203 14:56:13.150485   16714 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.key
	I0203 14:56:13.150498   16714 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt with IP's: []
	I0203 14:56:13.271891   16714 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt ...
	I0203 14:56:13.271909   16714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: {Name:mkcc6b8f0ea4a26c759d14054aef19278984ed6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:56:13.272262   16714 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.key ...
	I0203 14:56:13.272274   16714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.key: {Name:mkae69d3900acf4cfd3bc3778e50bee50d4ef1fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:56:13.272500   16714 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/apiserver.key.c7fa3a9e
	I0203 14:56:13.272518   16714 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0203 14:56:13.591741   16714 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/apiserver.crt.c7fa3a9e ...
	I0203 14:56:13.591760   16714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/apiserver.crt.c7fa3a9e: {Name:mkfdaccd23871781143032d082ec31db10b37551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:56:13.592070   16714 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/apiserver.key.c7fa3a9e ...
	I0203 14:56:13.592078   16714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/apiserver.key.c7fa3a9e: {Name:mkf41998659c1f34dc37eb47ac6ae3db4d8d5ef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:56:13.592253   16714 certs.go:333] copying /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/apiserver.crt
	I0203 14:56:13.592427   16714 certs.go:337] copying /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/apiserver.key
	I0203 14:56:13.592585   16714 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/proxy-client.key
	I0203 14:56:13.592599   16714 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/proxy-client.crt with IP's: []
	I0203 14:56:13.626675   16714 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/proxy-client.crt ...
	I0203 14:56:13.626682   16714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/proxy-client.crt: {Name:mk930e7bdd10bec5c466dd6e5042b8cc3bd4f35d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:56:13.626880   16714 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/proxy-client.key ...
	I0203 14:56:13.626888   16714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/proxy-client.key: {Name:mkc63ccd7ca062879d245ec23ec0c602726b1a7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:56:13.627253   16714 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem (1338 bytes)
	W0203 14:56:13.627300   16714 certs.go:397] ignoring /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568_empty.pem, impossibly tiny 0 bytes
	I0203 14:56:13.627310   16714 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem (1675 bytes)
	I0203 14:56:13.627343   16714 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem (1078 bytes)
	I0203 14:56:13.627372   16714 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem (1123 bytes)
	I0203 14:56:13.627400   16714 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem (1675 bytes)
	I0203 14:56:13.627468   16714 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem (1708 bytes)
	I0203 14:56:13.627957   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0203 14:56:13.646199   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 14:56:13.663725   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 14:56:13.681079   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 14:56:13.698427   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 14:56:13.715665   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 14:56:13.733311   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 14:56:13.750515   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 14:56:13.767795   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 14:56:13.785458   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem --> /usr/share/ca-certificates/2568.pem (1338 bytes)
	I0203 14:56:13.802901   16714 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /usr/share/ca-certificates/25682.pem (1708 bytes)
	I0203 14:56:13.820335   16714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 14:56:13.833389   16714 ssh_runner.go:195] Run: openssl version
	I0203 14:56:13.838995   16714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 14:56:13.847234   16714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 14:56:13.851274   16714 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb  3 22:08 /usr/share/ca-certificates/minikubeCA.pem
	I0203 14:56:13.851319   16714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 14:56:13.856948   16714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 14:56:13.865534   16714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2568.pem && ln -fs /usr/share/ca-certificates/2568.pem /etc/ssl/certs/2568.pem"
	I0203 14:56:13.874790   16714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2568.pem
	I0203 14:56:13.879136   16714 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb  3 22:13 /usr/share/ca-certificates/2568.pem
	I0203 14:56:13.879178   16714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2568.pem
	I0203 14:56:13.884502   16714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2568.pem /etc/ssl/certs/51391683.0"
	I0203 14:56:13.892807   16714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25682.pem && ln -fs /usr/share/ca-certificates/25682.pem /etc/ssl/certs/25682.pem"
	I0203 14:56:13.901357   16714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25682.pem
	I0203 14:56:13.905404   16714 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb  3 22:13 /usr/share/ca-certificates/25682.pem
	I0203 14:56:13.905452   16714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25682.pem
	I0203 14:56:13.910877   16714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25682.pem /etc/ssl/certs/3ec20f2e.0"
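The openssl/ln sequence above installs each CA under its OpenSSL subject-hash name, which is how tools locate trust anchors in /etc/ssl/certs. A condensed sketch of the same steps for one certificate (the hash values printed in this log, e.g. b5213941, come from exactly this command):

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"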
	I0203 14:56:13.918918   16714 kubeadm.go:401] StartCluster: {Name:enable-default-cni-292000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:enable-default-cni-292000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 14:56:13.919028   16714 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 14:56:13.941990   16714 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 14:56:13.949934   16714 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 14:56:13.957684   16714 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0203 14:56:13.957733   16714 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 14:56:13.965254   16714 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 14:56:13.965275   16714 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 14:56:14.014516   16714 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0203 14:56:14.014561   16714 kubeadm.go:322] [preflight] Running pre-flight checks
	I0203 14:56:14.117519   16714 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 14:56:14.117634   16714 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 14:56:14.117789   16714 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 14:56:14.246335   16714 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 14:56:14.268176   16714 out.go:204]   - Generating certificates and keys ...
	I0203 14:56:14.268269   16714 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0203 14:56:14.268342   16714 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0203 14:56:14.386906   16714 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 14:56:11.767788   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:11.769120   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:11.769144   16232 retry.go:31] will retry after 1.596532639s: state is "Stopped"
	I0203 14:56:13.366129   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:13.367660   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:13.367679   16232 retry.go:31] will retry after 2.189373114s: state is "Stopped"
	I0203 14:56:15.557182   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:15.559082   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:15.559113   16232 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0203 14:56:15.559127   16232 kubeadm.go:1120] stopping kube-system containers ...
	I0203 14:56:15.559215   16232 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 14:56:15.587163   16232 docker.go:456] Stopping containers: [9491d4daf8ff b293481fd48e 03ce19bda41b 703fa0f5ea89 5de1e98a76c3 0809e0b9ce25 bf82fb65a91c 98a0d1f23423 fb36ac1ef095 c6c7111feeb3 9af42e5a0815 dcc951d0920d 380e6597458b c882a0c0286c fb15ce47a7c1 4e7873f1eb15 d862c82c03c2 ecddfde127e3]
	I0203 14:56:15.587250   16232 ssh_runner.go:195] Run: docker stop 9491d4daf8ff b293481fd48e 03ce19bda41b 703fa0f5ea89 5de1e98a76c3 0809e0b9ce25 bf82fb65a91c 98a0d1f23423 fb36ac1ef095 c6c7111feeb3 9af42e5a0815 dcc951d0920d 380e6597458b c882a0c0286c fb15ce47a7c1 4e7873f1eb15 d862c82c03c2 ecddfde127e3
	I0203 14:56:14.623655   16714 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0203 14:56:14.911853   16714 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0203 14:56:14.972658   16714 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0203 14:56:15.204327   16714 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0203 14:56:15.204478   16714 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-292000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0203 14:56:15.428715   16714 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0203 14:56:15.428924   16714 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-292000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0203 14:56:15.501596   16714 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 14:56:15.652081   16714 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 14:56:15.711669   16714 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0203 14:56:15.712352   16714 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 14:56:15.943391   16714 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 14:56:16.175433   16714 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 14:56:16.281108   16714 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 14:56:16.400480   16714 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 14:56:16.411439   16714 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 14:56:16.412201   16714 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 14:56:16.412260   16714 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0203 14:56:16.480818   16714 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 14:56:16.503368   16714 out.go:204]   - Booting up control plane ...
	I0203 14:56:16.503485   16714 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 14:56:16.503572   16714 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 14:56:16.503651   16714 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 14:56:16.503782   16714 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 14:56:16.503937   16714 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 14:56:15.790059   16232 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 14:56:15.829988   16232 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 14:56:15.838508   16232 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb  3 22:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb  3 22:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Feb  3 22:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Feb  3 22:55 /etc/kubernetes/scheduler.conf
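The reconfigure path that follows checks each existing kubeconfig for the expected control-plane endpoint and deletes any file that no longer points at it. The grep/rm pairs in the log are roughly equivalent to:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done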
	
	I0203 14:56:15.838568   16232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 14:56:15.846681   16232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 14:56:15.854733   16232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 14:56:15.862334   16232 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0203 14:56:15.862383   16232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 14:56:15.869833   16232 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 14:56:15.877356   16232 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0203 14:56:15.877408   16232 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 14:56:15.885194   16232 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 14:56:15.893369   16232 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0203 14:56:15.893382   16232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 14:56:15.951979   16232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 14:56:16.464746   16232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 14:56:16.608640   16232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 14:56:16.681128   16232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0203 14:56:16.782253   16232 api_server.go:51] waiting for apiserver process to appear ...
	I0203 14:56:16.782408   16232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 14:56:17.297665   16232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 14:56:17.795765   16232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 14:56:17.867914   16232 api_server.go:71] duration metric: took 1.08563783s to wait for apiserver process to appear ...
	I0203 14:56:17.867937   16232 api_server.go:87] waiting for apiserver healthz status ...
	I0203 14:56:17.867950   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:17.869317   16232 api_server.go:268] stopped: https://127.0.0.1:53273/healthz: Get "https://127.0.0.1:53273/healthz": EOF
	I0203 14:56:18.370173   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:20.709111   16232 api_server.go:278] https://127.0.0.1:53273/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 14:56:20.709132   16232 api_server.go:102] status: https://127.0.0.1:53273/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 14:56:20.871456   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:20.877223   16232 api_server.go:278] https://127.0.0.1:53273/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 14:56:20.877237   16232 api_server.go:102] status: https://127.0.0.1:53273/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 14:56:21.369515   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:21.374770   16232 api_server.go:278] https://127.0.0.1:53273/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 14:56:21.374781   16232 api_server.go:102] status: https://127.0.0.1:53273/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
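The [-] entries above mark apiserver post-start hooks that have not finished yet; /healthz only returns 200 once every registered check passes. The same verbose breakdown can be fetched directly from the host (port taken from this run, -k because the cluster CA is not in the host trust store); note that an anonymous request can still be rejected with 403, as seen earlier in this log, until the rbac/bootstrap-roles hook has completed:

    curl -k "https://127.0.0.1:53273/healthz?verbose"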
	I0203 14:56:21.869515   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:21.874197   16232 api_server.go:278] https://127.0.0.1:53273/healthz returned 200:
	ok
	I0203 14:56:21.880622   16232 api_server.go:140] control plane version: v1.26.1
	I0203 14:56:21.880637   16232 api_server.go:130] duration metric: took 4.012610411s to wait for apiserver health ...
	I0203 14:56:21.880644   16232 cni.go:84] Creating CNI manager for ""
	I0203 14:56:21.880653   16232 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 14:56:21.902537   16232 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0203 14:56:21.924163   16232 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 14:56:21.934270   16232 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0203 14:56:21.947184   16232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 14:56:21.953437   16232 system_pods.go:59] 5 kube-system pods found
	I0203 14:56:21.953453   16232 system_pods.go:61] "etcd-kubernetes-upgrade-759000" [726ebeae-d842-4ebf-99c9-4dc6bbdc16c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 14:56:21.953457   16232 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-759000" [c6589d4d-a8a8-4d9b-8e03-7330abcc7987] Running
	I0203 14:56:21.953468   16232 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-759000" [e59946ef-c56b-4b67-9aa3-288baebeed26] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 14:56:21.953475   16232 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-759000" [7e04268f-63c8-42f4-9d1e-2d4cc5c70bf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0203 14:56:21.953480   16232 system_pods.go:61] "storage-provisioner" [a04f901a-20ef-451d-b692-fe20c053a58c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0203 14:56:21.953484   16232 system_pods.go:74] duration metric: took 6.28974ms to wait for pod list to return data ...
	I0203 14:56:21.953491   16232 node_conditions.go:102] verifying NodePressure condition ...
	I0203 14:56:21.956973   16232 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0203 14:56:21.956987   16232 node_conditions.go:123] node cpu capacity is 6
	I0203 14:56:21.957017   16232 node_conditions.go:105] duration metric: took 3.520589ms to run NodePressure ...
	I0203 14:56:21.957041   16232 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 14:56:22.092946   16232 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 14:56:22.100283   16232 ops.go:34] apiserver oom_adj: -16
	I0203 14:56:22.100296   16232 kubeadm.go:637] restartCluster took 37.674397451s
	I0203 14:56:22.100303   16232 kubeadm.go:403] StartCluster complete in 37.714114586s
	I0203 14:56:22.100316   16232 settings.go:142] acquiring lock: {Name:mk82a7d24fccbbf9730201facefdc9acc345e8e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:56:22.100403   16232 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 14:56:22.100853   16232 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/kubeconfig: {Name:mkf113f45b09a6304f4248a99f0e16d42a3468fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:56:22.101103   16232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0203 14:56:22.101129   16232 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0203 14:56:22.101192   16232 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-759000"
	I0203 14:56:22.101196   16232 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-759000"
	I0203 14:56:22.101206   16232 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-759000"
	W0203 14:56:22.101212   16232 addons.go:236] addon storage-provisioner should already be in state true
	I0203 14:56:22.101229   16232 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-759000"
	I0203 14:56:22.101241   16232 config.go:180] Loaded profile config "kubernetes-upgrade-759000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 14:56:22.101243   16232 host.go:66] Checking if "kubernetes-upgrade-759000" exists ...
	I0203 14:56:22.101524   16232 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-759000 --format={{.State.Status}}
	I0203 14:56:22.101566   16232 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-759000 --format={{.State.Status}}
	I0203 14:56:22.101549   16232 kapi.go:59] client config for kubernetes-upgrade-759000: &rest.Config{Host:"https://127.0.0.1:53273", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/client.key", CAFile:"/Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2451f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 14:56:22.107045   16232 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-759000" context rescaled to 1 replicas
	I0203 14:56:22.107075   16232 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 14:56:22.144281   16232 out.go:177] * Verifying Kubernetes components...
	I0203 14:56:22.188490   16232 start.go:892] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0203 14:56:22.201932   16232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 14:56:22.235024   16232 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0203 14:56:22.214713   16232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:56:22.214674   16232 kapi.go:59] client config for kubernetes-upgrade-759000: &rest.Config{Host:"https://127.0.0.1:53273", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubernetes-upgrade-759000/client.key", CAFile:"/Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2451f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0203 14:56:22.256388   16232 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 14:56:22.256413   16232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 14:56:22.257145   16232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:56:22.267543   16232 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-759000"
	W0203 14:56:22.267564   16232 addons.go:236] addon default-storageclass should already be in state true
	I0203 14:56:22.267579   16232 host.go:66] Checking if "kubernetes-upgrade-759000" exists ...
	I0203 14:56:22.267942   16232 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-759000 --format={{.State.Status}}
	I0203 14:56:22.327038   16232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53274 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/kubernetes-upgrade-759000/id_rsa Username:docker}
	I0203 14:56:22.327977   16232 api_server.go:51] waiting for apiserver process to appear ...
	I0203 14:56:22.328043   16232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 14:56:22.335655   16232 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 14:56:22.335667   16232 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 14:56:22.335776   16232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-759000
	I0203 14:56:22.341258   16232 api_server.go:71] duration metric: took 234.128055ms to wait for apiserver process to appear ...
	I0203 14:56:22.341288   16232 api_server.go:87] waiting for apiserver healthz status ...
	I0203 14:56:22.341306   16232 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53273/healthz ...
	I0203 14:56:22.347492   16232 api_server.go:278] https://127.0.0.1:53273/healthz returned 200:
	ok
	I0203 14:56:22.349133   16232 api_server.go:140] control plane version: v1.26.1
	I0203 14:56:22.349143   16232 api_server.go:130] duration metric: took 7.850051ms to wait for apiserver health ...
	I0203 14:56:22.349163   16232 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 14:56:22.354250   16232 system_pods.go:59] 5 kube-system pods found
	I0203 14:56:22.354283   16232 system_pods.go:61] "etcd-kubernetes-upgrade-759000" [726ebeae-d842-4ebf-99c9-4dc6bbdc16c0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 14:56:22.354293   16232 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-759000" [c6589d4d-a8a8-4d9b-8e03-7330abcc7987] Running
	I0203 14:56:22.354308   16232 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-759000" [e59946ef-c56b-4b67-9aa3-288baebeed26] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0203 14:56:22.354321   16232 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-759000" [7e04268f-63c8-42f4-9d1e-2d4cc5c70bf6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0203 14:56:22.354330   16232 system_pods.go:61] "storage-provisioner" [a04f901a-20ef-451d-b692-fe20c053a58c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0203 14:56:22.354336   16232 system_pods.go:74] duration metric: took 5.169123ms to wait for pod list to return data ...
	I0203 14:56:22.354346   16232 kubeadm.go:578] duration metric: took 247.24517ms to wait for : map[apiserver:true system_pods:true] ...
	I0203 14:56:22.354356   16232 node_conditions.go:102] verifying NodePressure condition ...
	I0203 14:56:22.357566   16232 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0203 14:56:22.357578   16232 node_conditions.go:123] node cpu capacity is 6
	I0203 14:56:22.357587   16232 node_conditions.go:105] duration metric: took 3.226493ms to run NodePressure ...
	I0203 14:56:22.357595   16232 start.go:228] waiting for startup goroutines ...
	I0203 14:56:22.400086   16232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53274 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/kubernetes-upgrade-759000/id_rsa Username:docker}
	I0203 14:56:22.428517   16232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 14:56:22.515989   16232 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 14:56:23.207328   16232 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0203 14:56:23.248544   16232 addons.go:492] enable addons completed in 1.147353128s: enabled=[storage-provisioner default-storageclass]
	I0203 14:56:23.248586   16232 start.go:233] waiting for cluster config update ...
	I0203 14:56:23.248614   16232 start.go:240] writing updated cluster config ...
	I0203 14:56:23.249321   16232 ssh_runner.go:195] Run: rm -f paused
	I0203 14:56:23.289401   16232 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0203 14:56:23.311337   16232 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-759000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-03 22:50:55 UTC, end at Fri 2023-02-03 22:56:24 UTC. --
	Feb 03 22:55:42 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:42.857349515Z" level=info msg="Starting up"
	Feb 03 22:55:42 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:42.859349751Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 03 22:55:42 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:42.859393666Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 03 22:55:42 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:42.859420113Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 03 22:55:42 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:42.859429100Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 03 22:55:42 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:42.860568523Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 03 22:55:42 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:42.860609708Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 03 22:55:42 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:42.860623398Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 03 22:55:42 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:42.860629871Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 03 22:55:42 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:42.871696945Z" level=info msg="Loading containers: start."
	Feb 03 22:55:43 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:43.000129743Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 03 22:55:43 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:43.046478605Z" level=info msg="Loading containers: done."
	Feb 03 22:55:43 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:43.067020430Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Feb 03 22:55:43 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:43.067102320Z" level=info msg="Daemon has completed initialization"
	Feb 03 22:55:43 kubernetes-upgrade-759000 systemd[1]: Started Docker Application Container Engine.
	Feb 03 22:55:43 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:43.094922580Z" level=info msg="API listen on [::]:2376"
	Feb 03 22:55:43 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:55:43.099735115Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 03 22:56:05 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:56:05.515622798Z" level=info msg="ignoring event" container=703fa0f5ea897c8b347d5ab0c4de751a48e1ca79c076498e4fde76b5bc98059a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:56:15 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:56:15.655634410Z" level=info msg="ignoring event" container=5de1e98a76c3c52c03292ebecd034dafffcb660fa4bd12440b1411a5a655b909 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:56:15 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:56:15.670542522Z" level=info msg="ignoring event" container=0809e0b9ce2502f283d0cc0ab362d0aa2047bff9e8c4f0ab13be92effe13f99f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:56:15 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:56:15.672197665Z" level=info msg="ignoring event" container=b293481fd48efe8fe1fa139bdfaead4c08aa68b16be2f9fc6f8e8fe3255f8756 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:56:15 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:56:15.677090392Z" level=info msg="ignoring event" container=bf82fb65a91cdceb2a27c8102a3c24c2f113e61523b22037fe54bb32940ebc60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:56:15 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:56:15.683130729Z" level=info msg="ignoring event" container=9491d4daf8ffdb17677155c86afce6371e0a7ba1c39150c66cde22fdfb485665 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:56:15 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:56:15.688538542Z" level=info msg="ignoring event" container=03ce19bda41b0297c865fd56372617783d137cd632792d097432ce7c479e482d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 03 22:56:15 kubernetes-upgrade-759000 dockerd[12068]: time="2023-02-03T22:56:15.692473674Z" level=info msg="ignoring event" container=98a0d1f23423f8812997a6339ba5cd8a196f08f03fa4e6a44c7330213bde2fbe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	44105a93204a4       fce326961ae2d       7 seconds ago       Running             etcd                      3                   85d7efb8f1086
	e27c02eb9a0cb       deb04688c4a35       7 seconds ago       Running             kube-apiserver            2                   862c29e77a6f6
	8d4db0af98975       655493523f607       7 seconds ago       Running             kube-scheduler            3                   8a6ccd90782b2
	528a4eacada2f       e9c08e11b07f6       7 seconds ago       Running             kube-controller-manager   3                   f6d83e7e422ef
	9491d4daf8ffd       fce326961ae2d       19 seconds ago      Exited              etcd                      2                   5de1e98a76c3c
	b293481fd48ef       e9c08e11b07f6       24 seconds ago      Exited              kube-controller-manager   2                   bf82fb65a91cd
	03ce19bda41b0       655493523f607       32 seconds ago      Exited              kube-scheduler            2                   98a0d1f23423f
	703fa0f5ea897       deb04688c4a35       40 seconds ago      Exited              kube-apiserver            1                   0809e0b9ce250
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-759000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-759000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b839c677c13f941c936975b72b386dd12a345761
	                    minikube.k8s.io/name=kubernetes-upgrade-759000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_03T14_55_33_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Feb 2023 22:55:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-759000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Feb 2023 22:56:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Feb 2023 22:56:20 +0000   Fri, 03 Feb 2023 22:55:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Feb 2023 22:56:20 +0000   Fri, 03 Feb 2023 22:55:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Feb 2023 22:56:20 +0000   Fri, 03 Feb 2023 22:55:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Feb 2023 22:56:20 +0000   Fri, 03 Feb 2023 22:55:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-759000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4c0b538bb934883b9b745615631a0cd
	  System UUID:                b4c0b538bb934883b9b745615631a0cd
	  Boot ID:                    1da703b4-de02-410a-8cc8-a1231caca873
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-759000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         51s
	  kube-system                 kube-apiserver-kubernetes-upgrade-759000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-759000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-kubernetes-upgrade-759000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  58s (x4 over 58s)  kubelet  Node kubernetes-upgrade-759000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x4 over 58s)  kubelet  Node kubernetes-upgrade-759000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x4 over 58s)  kubelet  Node kubernetes-upgrade-759000 status is now: NodeHasSufficientPID
	  Normal  Starting                 51s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  51s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  51s                kubelet  Node kubernetes-upgrade-759000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s                kubelet  Node kubernetes-upgrade-759000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s                kubelet  Node kubernetes-upgrade-759000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                50s                kubelet  Node kubernetes-upgrade-759000 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Feb 3 22:17] FS-Cache: N-key=[8] '738a9e0400000000'
	[  +3.012853] FS-Cache: Duplicate cookie detected
	[  +0.000079] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000058] FS-Cache: O-cookie d=00000000c7cee421{9p.inode} n=000000009ebaef5e
	[  +0.000108] FS-Cache: O-key=[8] '728a9e0400000000'
	[  +0.000077] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000106] FS-Cache: N-cookie d=00000000c7cee421{9p.inode} n=0000000051e864d1
	[  +0.000110] FS-Cache: N-key=[8] '728a9e0400000000'
	[  +0.401234] FS-Cache: Duplicate cookie detected
	[  +0.000036] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000117] FS-Cache: O-cookie d=00000000c7cee421{9p.inode} n=00000000920e2302
	[  +0.000043] FS-Cache: O-key=[8] '908a9e0400000000'
	[  +0.000035] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000065] FS-Cache: N-cookie d=00000000c7cee421{9p.inode} n=000000005149b4d9
	[  +0.000193] FS-Cache: N-key=[8] '908a9e0400000000'
	
	* 
	* ==> etcd [44105a93204a] <==
	* {"level":"info","ts":"2023-02-03T22:56:17.889Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-03T22:56:17.889Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-02-03T22:56:17.890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-02-03T22:56:17.890Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-02-03T22:56:17.890Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-03T22:56:17.890Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-03T22:56:17.892Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-03T22:56:17.892Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-03T22:56:17.892Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-03T22:56:17.892Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-03T22:56:17.892Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-03T22:56:19.682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-02-03T22:56:19.682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-02-03T22:56:19.682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-02-03T22:56:19.682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-02-03T22:56:19.682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-02-03T22:56:19.682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-02-03T22:56:19.682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-02-03T22:56:19.684Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-759000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-03T22:56:19.684Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-03T22:56:19.685Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-03T22:56:19.685Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-03T22:56:19.685Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-03T22:56:19.686Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-02-03T22:56:19.686Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [9491d4daf8ff] <==
	* {"level":"info","ts":"2023-02-03T22:56:05.908Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-03T22:56:05.908Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-03T22:56:05.908Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-03T22:56:05.908Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-03T22:56:05.908Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-03T22:56:06.901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-02-03T22:56:06.901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-02-03T22:56:06.901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-02-03T22:56:06.901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-02-03T22:56:06.901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-02-03T22:56:06.901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-02-03T22:56:06.901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-02-03T22:56:06.902Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-759000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-03T22:56:06.902Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-03T22:56:06.902Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-03T22:56:06.902Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-03T22:56:06.902Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-03T22:56:06.903Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-03T22:56:06.904Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-02-03T22:56:15.623Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-02-03T22:56:15.623Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-759000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"info","ts":"2023-02-03T22:56:15.627Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-02-03T22:56:15.629Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-03T22:56:15.630Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-02-03T22:56:15.630Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-759000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  22:56:25 up 55 min,  0 users,  load average: 1.74, 1.57, 1.40
	Linux kubernetes-upgrade-759000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [703fa0f5ea89] <==
	* W0203 22:56:00.950600       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0203 22:56:01.369271       1 logging.go:59] [core] [Channel #3 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0203 22:56:02.427065       1 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	E0203 22:56:05.489634       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-apiserver [e27c02eb9a0c] <==
	* I0203 22:56:20.693558       1 autoregister_controller.go:141] Starting autoregister controller
	I0203 22:56:20.693561       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0203 22:56:20.693684       1 controller.go:121] Starting legacy_token_tracking_controller
	I0203 22:56:20.693718       1 shared_informer.go:273] Waiting for caches to sync for configmaps
	I0203 22:56:20.693758       1 apf_controller.go:361] Starting API Priority and Fairness config controller
	I0203 22:56:20.693942       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0203 22:56:20.693951       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	E0203 22:56:20.723536       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0203 22:56:20.759331       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0203 22:56:20.792888       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0203 22:56:20.793036       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0203 22:56:20.793659       1 cache.go:39] Caches are synced for autoregister controller
	I0203 22:56:20.793711       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0203 22:56:20.793783       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0203 22:56:20.793791       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0203 22:56:20.793838       1 shared_informer.go:280] Caches are synced for configmaps
	I0203 22:56:20.794067       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0203 22:56:20.856073       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0203 22:56:21.514659       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0203 22:56:21.699407       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0203 22:56:22.030083       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0203 22:56:22.036252       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0203 22:56:22.063916       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0203 22:56:22.079257       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0203 22:56:22.084250       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [528a4eacada2] <==
	* I0203 22:56:23.229110       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for podtemplates
	I0203 22:56:23.229189       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	W0203 22:56:23.229228       1 shared_informer.go:550] resyncPeriod 12h58m23.600104026s is smaller than resyncCheckPeriod 20h18m59.593462301s and the informer has already started. Changing it to 20h18m59.593462301s
	I0203 22:56:23.229285       1 controllermanager.go:622] Started "resourcequota"
	I0203 22:56:23.229354       1 resource_quota_controller.go:277] Starting resource quota controller
	I0203 22:56:23.229434       1 shared_informer.go:273] Waiting for caches to sync for resource quota
	I0203 22:56:23.229461       1 resource_quota_monitor.go:295] QuotaMonitor running
	I0203 22:56:23.370867       1 controllermanager.go:622] Started "garbagecollector"
	I0203 22:56:23.370930       1 garbagecollector.go:154] Starting garbage collector controller
	I0203 22:56:23.370940       1 shared_informer.go:273] Waiting for caches to sync for garbage collector
	I0203 22:56:23.370957       1 graph_builder.go:291] GraphBuilder running
	I0203 22:56:23.421692       1 controllermanager.go:622] Started "csrcleaner"
	I0203 22:56:23.421767       1 cleaner.go:82] Starting CSR cleaner controller
	I0203 22:56:23.520531       1 controllermanager.go:622] Started "persistentvolume-expander"
	I0203 22:56:23.520573       1 expand_controller.go:340] Starting expand controller
	I0203 22:56:23.520640       1 shared_informer.go:273] Waiting for caches to sync for expand
	I0203 22:56:23.621756       1 controllermanager.go:622] Started "replicationcontroller"
	I0203 22:56:23.621834       1 replica_set.go:201] Starting replicationcontroller controller
	I0203 22:56:23.621840       1 shared_informer.go:273] Waiting for caches to sync for ReplicationController
	I0203 22:56:23.671812       1 controllermanager.go:622] Started "csrapproving"
	I0203 22:56:23.671840       1 certificate_controller.go:112] Starting certificate controller "csrapproving"
	I0203 22:56:23.671853       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrapproving
	I0203 22:56:23.721180       1 controllermanager.go:622] Started "ttl"
	I0203 22:56:23.721210       1 ttl_controller.go:120] Starting TTL controller
	I0203 22:56:23.721220       1 shared_informer.go:273] Waiting for caches to sync for TTL
	
	* 
	* ==> kube-controller-manager [b293481fd48e] <==
	* I0203 22:56:00.590293       1 serving.go:348] Generated self-signed cert in-memory
	I0203 22:56:00.808998       1 controllermanager.go:182] Version: v1.26.1
	I0203 22:56:00.809049       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 22:56:00.809977       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0203 22:56:00.810017       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0203 22:56:00.810019       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0203 22:56:00.810104       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [03ce19bda41b] <==
	* W0203 22:56:12.491990       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0203 22:56:12.492044       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0203 22:56:12.796308       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0203 22:56:12.796334       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.76.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0203 22:56:13.286230       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0203 22:56:13.286297       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0203 22:56:13.360341       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0203 22:56:13.360394       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.76.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0203 22:56:13.540705       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0203 22:56:13.540753       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.76.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0203 22:56:13.656184       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0203 22:56:13.656238       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.76.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0203 22:56:14.443326       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0203 22:56:14.443377       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.76.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0203 22:56:14.715815       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0203 22:56:14.715866       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.76.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0203 22:56:14.896837       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0203 22:56:14.896889       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.76.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0203 22:56:15.504041       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0203 22:56:15.504087       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.76.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	I0203 22:56:15.626357       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0203 22:56:15.626498       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0203 22:56:15.626910       1 shared_informer.go:276] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 22:56:15.626940       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0203 22:56:15.627036       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [8d4db0af9897] <==
	* I0203 22:56:18.491403       1 serving.go:348] Generated self-signed cert in-memory
	W0203 22:56:20.711217       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0203 22:56:20.711300       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0203 22:56:20.711313       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0203 22:56:20.711319       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0203 22:56:20.757413       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0203 22:56:20.757490       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0203 22:56:20.758829       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0203 22:56:20.758964       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0203 22:56:20.759013       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0203 22:56:20.759047       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0203 22:56:20.859758       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-03 22:50:55 UTC, end at Fri 2023-02-03 22:56:26 UTC. --
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.057630   13520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f908adea9796b19a9647c81fa6d6aa07-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-759000\" (UID: \"f908adea9796b19a9647c81fa6d6aa07\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.057662   13520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f908adea9796b19a9647c81fa6d6aa07-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-759000\" (UID: \"f908adea9796b19a9647c81fa6d6aa07\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.073718   13520 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: E0203 22:56:17.074003   13520 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.158203   13520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb159b22b24253ca1a6713cec82ba793-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-759000\" (UID: \"eb159b22b24253ca1a6713cec82ba793\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.158321   13520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb159b22b24253ca1a6713cec82ba793-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-759000\" (UID: \"eb159b22b24253ca1a6713cec82ba793\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.158350   13520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb159b22b24253ca1a6713cec82ba793-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-759000\" (UID: \"eb159b22b24253ca1a6713cec82ba793\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.158466   13520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/8e24507b6200163c997bd9eabb2bf9ee-etcd-data\") pod \"etcd-kubernetes-upgrade-759000\" (UID: \"8e24507b6200163c997bd9eabb2bf9ee\") " pod="kube-system/etcd-kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.158631   13520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e58aaf28364bae857d44ce1c3b2e4cf2-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-759000\" (UID: \"e58aaf28364bae857d44ce1c3b2e4cf2\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.158871   13520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb159b22b24253ca1a6713cec82ba793-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-759000\" (UID: \"eb159b22b24253ca1a6713cec82ba793\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.158955   13520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb159b22b24253ca1a6713cec82ba793-etc-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-759000\" (UID: \"eb159b22b24253ca1a6713cec82ba793\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.159070   13520 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/8e24507b6200163c997bd9eabb2bf9ee-etcd-certs\") pod \"etcd-kubernetes-upgrade-759000\" (UID: \"8e24507b6200163c997bd9eabb2bf9ee\") " pod="kube-system/etcd-kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: E0203 22:56:17.358366   13520 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-759000?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.485324   13520 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: E0203 22:56:17.486135   13520 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-759000"
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: W0203 22:56:17.522437   13520 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: E0203 22:56:17.522569   13520 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Feb 03 22:56:17 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:17.978777   13520 status_manager.go:698] "Failed to get status for pod" podUID=eb159b22b24253ca1a6713cec82ba793 pod="kube-system/kube-apiserver-kubernetes-upgrade-759000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-759000\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Feb 03 22:56:18 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:18.295481   13520 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-759000"
	Feb 03 22:56:20 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:20.707854   13520 apiserver.go:52] "Watching apiserver"
	Feb 03 22:56:20 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:20.717059   13520 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 03 22:56:20 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:20.785171   13520 reconciler.go:41] "Reconciler: start to sync state"
	Feb 03 22:56:20 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:20.908164   13520 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-759000"
	Feb 03 22:56:20 kubernetes-upgrade-759000 kubelet[13520]: I0203 22:56:20.908256   13520 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-759000"
	Feb 03 22:56:21 kubernetes-upgrade-759000 kubelet[13520]: E0203 22:56:21.114137   13520 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-759000\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-759000"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-759000 -n kubernetes-upgrade-759000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-759000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-759000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-759000 describe pod storage-provisioner: exit status 1 (60.391628ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-759000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-759000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-759000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-759000: (2.9513181s)
--- FAIL: TestKubernetesUpgrade (588.83s)

                                                
                                    
TestMissingContainerUpgrade (54.59s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2978645850.exe start -p missing-upgrade-912000 --memory=2200 --driver=docker 
E0203 14:45:53.029507    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:46:10.683838    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2978645850.exe start -p missing-upgrade-912000 --memory=2200 --driver=docker : exit status 78 (40.691216263s)

                                                
                                                
-- stdout --
	* [missing-upgrade-912000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-912000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-912000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 18.42 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 39.42 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 61.37 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 82.98 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 105.72 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 127.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 148.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 171.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 193.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 213.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 235.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 257.92 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 279.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 302.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 324.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 347.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 368.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 383.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 406.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 428.19 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 450.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 472.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 493.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 516.50 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 538.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 542.91 MiB! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:46:06.660928874 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-912000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:46:26.133535974 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
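The provisioning step captured above rewrites /lib/systemd/system/docker.service inside the node container and then restarts docker, and it is that restart which exits non-zero; the log itself only points at "systemctl status docker.service" and "journalctl -xe". A hedged sketch of pulling those details straight from the node container (the container name matching the profile is confirmed by the docker inspect output later in this post-mortem; these commands are illustrative and were not run by the test):

    # show the unit file the provisioner installed
    docker exec missing-upgrade-912000 systemctl cat docker.service
    # show why the most recent start attempt failed
    docker exec missing-upgrade-912000 journalctl -u docker.service --no-pager | tail -n 50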
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2978645850.exe start -p missing-upgrade-912000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2978645850.exe start -p missing-upgrade-912000 --memory=2200 --driver=docker : exit status 70 (3.932092759s)

                                                
                                                
-- stdout --
	* [missing-upgrade-912000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-912000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-912000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2978645850.exe start -p missing-upgrade-912000 --memory=2200 --driver=docker 
E0203 14:46:35.655537    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 14:46:35.660623    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 14:46:35.671252    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 14:46:35.693309    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 14:46:35.734344    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 14:46:35.815471    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 14:46:35.976261    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 14:46:36.296453    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 14:46:36.936594    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 14:46:38.216883    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2978645850.exe start -p missing-upgrade-912000 --memory=2200 --driver=docker : exit status 70 (3.92840625s)

                                                
                                                
-- stdout --
	* [missing-upgrade-912000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-912000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-912000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:323: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-02-03 14:46:38.645996 -0800 PST m=+2337.376371029
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-912000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-912000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f272e0557ecdb6cfe1cf628f6fac25e816ac1ec9610d960033bdc9e5334fa5e4",
	        "Created": "2023-02-03T22:46:14.821446038Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 175028,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T22:46:15.048075054Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/f272e0557ecdb6cfe1cf628f6fac25e816ac1ec9610d960033bdc9e5334fa5e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f272e0557ecdb6cfe1cf628f6fac25e816ac1ec9610d960033bdc9e5334fa5e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/f272e0557ecdb6cfe1cf628f6fac25e816ac1ec9610d960033bdc9e5334fa5e4/hosts",
	        "LogPath": "/var/lib/docker/containers/f272e0557ecdb6cfe1cf628f6fac25e816ac1ec9610d960033bdc9e5334fa5e4/f272e0557ecdb6cfe1cf628f6fac25e816ac1ec9610d960033bdc9e5334fa5e4-json.log",
	        "Name": "/missing-upgrade-912000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-912000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f755c5f88823a4e1307d0f96d1f9d05c5e9a744e175e4db14c41b93c6ecaa2cb-init/diff:/var/lib/docker/overlay2/0e45eea7f3fb4962af92006f1e50e7e1da5c85efa57d6aa3026f0ceb6e570b13/diff:/var/lib/docker/overlay2/c4a202f224a13cbb1a3c83e83a9a87b0fee6291f1aa9044b2bd01f7977c702fe/diff:/var/lib/docker/overlay2/b42f579467ea0803828df2cb72a179577a360ffc0a043910d0b1b0ab083b1773/diff:/var/lib/docker/overlay2/2eb7e4f1831bd2b2aac8391fb5f73c949b5b7d0a99cdd12e902d50aaf06c5cd2/diff:/var/lib/docker/overlay2/a12c9308abebef887cfaffb957c3dedda7b18bf2f4bec1d2b757a38b571a49f5/diff:/var/lib/docker/overlay2/8dded86ab9bfc2e181766326dfc1228a773720c621ef760a5943b059a74b5382/diff:/var/lib/docker/overlay2/0f9ed804492884efd49f2d26ebcf8a4af978522ae9c03128eff86109dabb8a7e/diff:/var/lib/docker/overlay2/dc13b340ca01b6f458386eb447441c8ab4fd38217e83efec290e3e258a5f127a/diff:/var/lib/docker/overlay2/476224c17de9ec09306385aa99af28a3dcca086e06168e8ff795796b08209bec/diff:/var/lib/docker/overlay2/c31373
437066fa8cb8716806dd01edd6f166098662b75b09a1401ad1e82de00b/diff:/var/lib/docker/overlay2/8a90b043c23a109c365402618d64f0bc61c99600a5f33f59fc23aa397ef7359d/diff:/var/lib/docker/overlay2/acc163d177a8160322a6263a046bdf4b27fec8a6338c413a1a9b6cead1df053e/diff:/var/lib/docker/overlay2/6fdb9b7b2a0a20ad1e74d64834c0ca968548b83c2b9dc0a6102d76cc40fc73c1/diff:/var/lib/docker/overlay2/1fc3b3f057ad56bd36d87c66e13d2eb3f8d2f8d42b78f994a41190966398230d/diff:/var/lib/docker/overlay2/7c77adf70fdd0620f690efce220c3c7cf524af3c35c26fe756c8594a4d8661cf/diff:/var/lib/docker/overlay2/99e3af7f7732d41e329ccbd3d67d8012be36ee1a30cb8a3333f8c3ba9d1bc2c6/diff:/var/lib/docker/overlay2/acdc6195f10a56c56c1d1ac87e2109fe9858322fecdb507fb88ed23a6acfd210/diff:/var/lib/docker/overlay2/c1a5824ac19243cc33ef6fc824d95ff7d32ab972f633a808667f84945c179ba0/diff:/var/lib/docker/overlay2/18e84590ec3ac1be497fcfb52de9ce1c04a8888ffc87279fcf7d7bd1a4547ef9/diff:/var/lib/docker/overlay2/46d5e1b43a5e1732c6b3a3c8cd84333e267a4742f32950d149a92508fcbad55f/diff:/var/lib/d
ocker/overlay2/54befe217e5b1fd508e83940924934465ce90d988a724bcc5a560957ff01e649/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f755c5f88823a4e1307d0f96d1f9d05c5e9a744e175e4db14c41b93c6ecaa2cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f755c5f88823a4e1307d0f96d1f9d05c5e9a744e175e4db14c41b93c6ecaa2cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f755c5f88823a4e1307d0f96d1f9d05c5e9a744e175e4db14c41b93c6ecaa2cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-912000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-912000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-912000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-912000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-912000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39bf352e86345d7b854ccc22fbfdb9a5606904d0cc839295f4926b84358bbf08",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52853"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52854"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52855"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/39bf352e8634",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "3faa5186cd933accba130cc0916383ae2fca9760ff8dbea5904c2d67103cae8b",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "83bb67533b6070e1c8f957427f8c719b1a829c4c7551ecb7db2a7401a6fee8e7",
	                    "EndpointID": "3faa5186cd933accba130cc0916383ae2fca9760ff8dbea5904c2d67103cae8b",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
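The post-mortem keeps the full docker inspect dump; when only a couple of fields matter, a format string keeps the output small. A minimal sketch using fields that appear in the dump above (for reference only, not part of the test run):

    # print just the container state and bridge IP from the inspect data
    docker inspect -f '{{.State.Status}} {{.NetworkSettings.IPAddress}}' missing-upgrade-912000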
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-912000 -n missing-upgrade-912000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-912000 -n missing-upgrade-912000: exit status 6 (388.491969ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 14:46:39.082563   13179 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-912000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-912000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
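The status warning above points at a stale kubeconfig entry rather than a stopped host, and the fix it names is "minikube update-context". A short sketch of that follow-up against the same profile, using the current test binary (an illustrative sequence, not something the test ran):

    # repoint the kubeconfig entry at the profile's current endpoint, then confirm the context
    out/minikube-darwin-amd64 update-context -p missing-upgrade-912000
    kubectl config current-context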
helpers_test.go:175: Cleaning up "missing-upgrade-912000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-912000
E0203 14:46:40.777199    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-912000: (2.342796872s)
--- FAIL: TestMissingContainerUpgrade (54.59s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (53.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2418196931.exe start -p stopped-upgrade-915000 --memory=2200 --vm-driver=docker 
E0203 14:47:57.583828    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2418196931.exe start -p stopped-upgrade-915000 --memory=2200 --vm-driver=docker : exit status 70 (41.158191512s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-915000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1928310048
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:47:50.522234742 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-915000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:48:10.209623840 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-915000", then "minikube start -p stopped-upgrade-915000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 7.78 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 29.55 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 51.64 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 73.53 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 95.12 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 116.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 139.03 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 162.52 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 183.94 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 206.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 228.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 249.69 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 271.41 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 293.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 315.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 337.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 360.02 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 381.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 401.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 422.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 444.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 466.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 489.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 503.84 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 524.39 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 22:48:10.209623840 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
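After the repeated StartHost failures, the legacy binary's own suggestion (printed in the stdout block above) is a delete-and-retry with more logging. A sketch of that sequence using the same temp-dir binary the test invokes, shown here for reference only:

    # wipe the half-provisioned profile, then retry with verbose logs as the tool suggests
    /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2418196931.exe delete -p stopped-upgrade-915000
    /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2418196931.exe start -p stopped-upgrade-915000 --alsologtostderr -v=1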
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2418196931.exe start -p stopped-upgrade-915000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2418196931.exe start -p stopped-upgrade-915000 --memory=2200 --vm-driver=docker : exit status 70 (4.415155208s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-915000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig821567844
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-915000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2418196931.exe start -p stopped-upgrade-915000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2418196931.exe start -p stopped-upgrade-915000 --memory=2200 --vm-driver=docker : exit status 70 (4.305959975s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-915000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2178514423
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-915000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:197: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (53.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (251.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-136000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-136000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m11.262122154s)

                                                
                                                
-- stdout --
	* [old-k8s-version-136000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-136000 in cluster old-k8s-version-136000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 15:01:03.904411   19862 out.go:296] Setting OutFile to fd 1 ...
	I0203 15:01:03.904578   19862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 15:01:03.904583   19862 out.go:309] Setting ErrFile to fd 2...
	I0203 15:01:03.904587   19862 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 15:01:03.904699   19862 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 15:01:03.905236   19862 out.go:303] Setting JSON to false
	I0203 15:01:03.923534   19862 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":3638,"bootTime":1675461625,"procs":380,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0203 15:01:03.923622   19862 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0203 15:01:03.945758   19862 out.go:177] * [old-k8s-version-136000] minikube v1.29.0 on Darwin 13.2
	I0203 15:01:03.988429   19862 notify.go:220] Checking for updates...
	I0203 15:01:04.010159   19862 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 15:01:04.031430   19862 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 15:01:04.052753   19862 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0203 15:01:04.074565   19862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 15:01:04.096365   19862 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	I0203 15:01:04.117561   19862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 15:01:04.140215   19862 config.go:180] Loaded profile config "false-292000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 15:01:04.140332   19862 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 15:01:04.203489   19862 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0203 15:01:04.203615   19862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 15:01:04.348511   19862 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 23:01:04.254011652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 15:01:04.370492   19862 out.go:177] * Using the docker driver based on user configuration
	I0203 15:01:04.392408   19862 start.go:296] selected driver: docker
	I0203 15:01:04.392438   19862 start.go:857] validating driver "docker" against <nil>
	I0203 15:01:04.392456   19862 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 15:01:04.396471   19862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 15:01:04.540788   19862 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 23:01:04.446030372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 15:01:04.540915   19862 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0203 15:01:04.541101   19862 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 15:01:04.563043   19862 out.go:177] * Using Docker Desktop driver with root privileges
	I0203 15:01:04.584469   19862 cni.go:84] Creating CNI manager for ""
	I0203 15:01:04.584507   19862 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0203 15:01:04.584520   19862 start_flags.go:319] config:
	{Name:old-k8s-version-136000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:01:04.628541   19862 out.go:177] * Starting control plane node old-k8s-version-136000 in cluster old-k8s-version-136000
	I0203 15:01:04.649495   19862 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 15:01:04.670530   19862 out.go:177] * Pulling base image ...
	I0203 15:01:04.712565   19862 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 15:01:04.712619   19862 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 15:01:04.712659   19862 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0203 15:01:04.712675   19862 cache.go:57] Caching tarball of preloaded images
	I0203 15:01:04.712885   19862 preload.go:174] Found /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 15:01:04.712906   19862 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0203 15:01:04.713917   19862 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/config.json ...
	I0203 15:01:04.714071   19862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/config.json: {Name:mk790fa0e2925daea627be1ef43250ed624d5ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:01:04.769795   19862 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon, skipping pull
	I0203 15:01:04.769810   19862 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in daemon, skipping load
	I0203 15:01:04.769830   19862 cache.go:193] Successfully downloaded all kic artifacts
	I0203 15:01:04.769926   19862 start.go:364] acquiring machines lock for old-k8s-version-136000: {Name:mk6d4a37aad431df09b59c262f13f34239bde2da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 15:01:04.770074   19862 start.go:368] acquired machines lock for "old-k8s-version-136000" in 135.335µs
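Editorial note on the "acquiring machines lock" step above: start.go serializes machine creation behind a named lock with a retry delay of 500ms and a 10m timeout (visible in the log line). The sketch below is a rough, self-contained approximation of such a retry-with-timeout file lock; the lock-file path and loop are assumptions for illustration, not minikube's actual lock implementation.

    // Hypothetical sketch only; not minikube's machines-lock code.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire polls for an exclusive lock file until it succeeds or the timeout expires.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !errors.Is(err, os.ErrExist) {
                return nil, err
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(delay) // matches the Delay:500ms / Timeout:10m0s shown in the log
        }
    }

    func main() {
        release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock acquired")
    }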
	I0203 15:01:04.770099   19862 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-136000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-136000 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 15:01:04.770157   19862 start.go:125] createHost starting for "" (driver="docker")
	I0203 15:01:04.813911   19862 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0203 15:01:04.814408   19862 start.go:159] libmachine.API.Create for "old-k8s-version-136000" (driver="docker")
	I0203 15:01:04.814473   19862 client.go:168] LocalClient.Create starting
	I0203 15:01:04.814718   19862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem
	I0203 15:01:04.814804   19862 main.go:141] libmachine: Decoding PEM data...
	I0203 15:01:04.814836   19862 main.go:141] libmachine: Parsing certificate...
	I0203 15:01:04.814933   19862 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem
	I0203 15:01:04.814994   19862 main.go:141] libmachine: Decoding PEM data...
	I0203 15:01:04.815009   19862 main.go:141] libmachine: Parsing certificate...
	I0203 15:01:04.815868   19862 cli_runner.go:164] Run: docker network inspect old-k8s-version-136000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0203 15:01:04.871506   19862 cli_runner.go:211] docker network inspect old-k8s-version-136000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0203 15:01:04.871601   19862 network_create.go:281] running [docker network inspect old-k8s-version-136000] to gather additional debugging logs...
	I0203 15:01:04.871628   19862 cli_runner.go:164] Run: docker network inspect old-k8s-version-136000
	W0203 15:01:04.926856   19862 cli_runner.go:211] docker network inspect old-k8s-version-136000 returned with exit code 1
	I0203 15:01:04.926886   19862 network_create.go:284] error running [docker network inspect old-k8s-version-136000]: docker network inspect old-k8s-version-136000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-136000
	I0203 15:01:04.926896   19862 network_create.go:286] output of [docker network inspect old-k8s-version-136000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-136000
	
	** /stderr **
	I0203 15:01:04.926989   19862 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0203 15:01:04.984433   19862 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0203 15:01:04.985136   19862 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001248660}
	I0203 15:01:04.985332   19862 network_create.go:123] attempt to create docker network old-k8s-version-136000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0203 15:01:04.985424   19862 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-136000 old-k8s-version-136000
	W0203 15:01:05.040342   19862 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-136000 old-k8s-version-136000 returned with exit code 1
	W0203 15:01:05.040374   19862 network_create.go:148] failed to create docker network old-k8s-version-136000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-136000 old-k8s-version-136000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0203 15:01:05.040395   19862 network_create.go:115] failed to create docker network old-k8s-version-136000 192.168.58.0/24, will retry: subnet is taken
	I0203 15:01:05.041872   19862 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0203 15:01:05.042185   19862 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000fc7eb0}
	I0203 15:01:05.042195   19862 network_create.go:123] attempt to create docker network old-k8s-version-136000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0203 15:01:05.042266   19862 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-136000 old-k8s-version-136000
	I0203 15:01:05.129984   19862 network_create.go:107] docker network old-k8s-version-136000 192.168.67.0/24 created
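For readers tracing the subnet-retry logic above: minikube skips subnets it already knows are reserved (192.168.49.0/24, then 192.168.58.0/24) and falls back to the next private /24 when the Docker daemon rejects the pool with "Pool overlaps with other one on this address space", eventually landing on 192.168.67.0/24. A minimal Go sketch of that retry loop follows; it is illustrative only, and the candidate list, gateway derivation, and error matching are assumptions rather than minikube's network_create.go code.

    // Hypothetical sketch only; not minikube's implementation.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // createClusterNetwork tries each candidate /24 until the Docker daemon accepts one;
    // an "overlapping pool" error means the subnet is already taken, so try the next.
    func createClusterNetwork(name string, candidates []string) (string, error) {
        for _, subnet := range candidates {
            gateway := strings.TrimSuffix(subnet, "0/24") + "1" // e.g. 192.168.58.0/24 -> 192.168.58.1
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge",
                "--subnet="+subnet,
                "--gateway="+gateway,
                "-o", "com.docker.network.driver.mtu=1500",
                name).CombinedOutput()
            if err == nil {
                return subnet, nil
            }
            if strings.Contains(string(out), "Pool overlaps") {
                continue // subnet taken, fall back to the next candidate
            }
            return "", fmt.Errorf("network create failed: %v: %s", err, out)
        }
        return "", fmt.Errorf("no free subnet found for %s", name)
    }

    func main() {
        subnet, err := createClusterNetwork("old-k8s-version-136000",
            []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"})
        fmt.Println(subnet, err)
    }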
	I0203 15:01:05.130028   19862 kic.go:117] calculated static IP "192.168.67.2" for the "old-k8s-version-136000" container
	I0203 15:01:05.130160   19862 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0203 15:01:05.186491   19862 cli_runner.go:164] Run: docker volume create old-k8s-version-136000 --label name.minikube.sigs.k8s.io=old-k8s-version-136000 --label created_by.minikube.sigs.k8s.io=true
	I0203 15:01:05.241890   19862 oci.go:103] Successfully created a docker volume old-k8s-version-136000
	I0203 15:01:05.242036   19862 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-136000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-136000 --entrypoint /usr/bin/test -v old-k8s-version-136000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -d /var/lib
	I0203 15:01:05.706911   19862 oci.go:107] Successfully prepared a docker volume old-k8s-version-136000
	I0203 15:01:05.706938   19862 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 15:01:05.706953   19862 kic.go:190] Starting extracting preloaded images to volume ...
	I0203 15:01:05.707080   19862 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-136000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir
	I0203 15:01:11.692968   19862 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-136000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.985693089s)
	I0203 15:01:11.692988   19862 kic.go:199] duration metric: took 5.985911 seconds to extract preloaded images to volume
	I0203 15:01:11.693092   19862 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0203 15:01:11.835319   19862 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-136000 --name old-k8s-version-136000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-136000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-136000 --network old-k8s-version-136000 --ip 192.168.67.2 --volume old-k8s-version-136000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8
	I0203 15:01:12.195277   19862 cli_runner.go:164] Run: docker container inspect old-k8s-version-136000 --format={{.State.Running}}
	I0203 15:01:12.255838   19862 cli_runner.go:164] Run: docker container inspect old-k8s-version-136000 --format={{.State.Status}}
	I0203 15:01:12.321819   19862 cli_runner.go:164] Run: docker exec old-k8s-version-136000 stat /var/lib/dpkg/alternatives/iptables
	I0203 15:01:12.443806   19862 oci.go:144] the created container "old-k8s-version-136000" has a running status.
	I0203 15:01:12.443837   19862 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa...
	I0203 15:01:12.606759   19862 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0203 15:01:12.734044   19862 cli_runner.go:164] Run: docker container inspect old-k8s-version-136000 --format={{.State.Status}}
	I0203 15:01:12.798725   19862 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0203 15:01:12.798748   19862 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-136000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0203 15:01:12.922032   19862 cli_runner.go:164] Run: docker container inspect old-k8s-version-136000 --format={{.State.Status}}
	I0203 15:01:12.987845   19862 machine.go:88] provisioning docker machine ...
	I0203 15:01:12.987905   19862 ubuntu.go:169] provisioning hostname "old-k8s-version-136000"
	I0203 15:01:12.988026   19862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:01:13.052801   19862 main.go:141] libmachine: Using SSH client type: native
	I0203 15:01:13.053008   19862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55132 <nil> <nil>}
	I0203 15:01:13.053020   19862 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-136000 && echo "old-k8s-version-136000" | sudo tee /etc/hostname
	I0203 15:01:13.195071   19862 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-136000
	
	I0203 15:01:13.195168   19862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:01:13.257534   19862 main.go:141] libmachine: Using SSH client type: native
	I0203 15:01:13.257715   19862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55132 <nil> <nil>}
	I0203 15:01:13.257733   19862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-136000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-136000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-136000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 15:01:13.385205   19862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
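The hostname provisioning above runs over SSH to the container's forwarded port 22 (published on 127.0.0.1:55132) using the generated id_rsa key. Below is a rough, self-contained sketch of running that same command with golang.org/x/crypto/ssh; the port and key path are copied from the log, everything else (error handling, host-key policy) is an assumption and not minikube's libmachine code.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa")
        must(err)
        signer, err := ssh.ParsePrivateKey(key)
        must(err)
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:55132", cfg)
        must(err)
        defer client.Close()
        session, err := client.NewSession()
        must(err)
        defer session.Close()
        out, err := session.CombinedOutput(`sudo hostname old-k8s-version-136000 && echo "old-k8s-version-136000" | sudo tee /etc/hostname`)
        fmt.Println(string(out), err)
    }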
	I0203 15:01:13.385229   19862 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15770-1719/.minikube CaCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15770-1719/.minikube}
	I0203 15:01:13.385254   19862 ubuntu.go:177] setting up certificates
	I0203 15:01:13.385272   19862 provision.go:83] configureAuth start
	I0203 15:01:13.385354   19862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-136000
	I0203 15:01:13.445028   19862 provision.go:138] copyHostCerts
	I0203 15:01:13.445121   19862 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem, removing ...
	I0203 15:01:13.445128   19862 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem
	I0203 15:01:13.445277   19862 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem (1078 bytes)
	I0203 15:01:13.445511   19862 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem, removing ...
	I0203 15:01:13.445517   19862 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem
	I0203 15:01:13.445593   19862 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem (1123 bytes)
	I0203 15:01:13.445752   19862 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem, removing ...
	I0203 15:01:13.445758   19862 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem
	I0203 15:01:13.445824   19862 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem (1675 bytes)
	I0203 15:01:13.445937   19862 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-136000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-136000]
	I0203 15:01:13.648041   19862 provision.go:172] copyRemoteCerts
	I0203 15:01:13.648099   19862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 15:01:13.648150   19862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:01:13.708333   19862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55132 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa Username:docker}
	I0203 15:01:13.799073   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 15:01:13.816941   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0203 15:01:13.836231   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 15:01:13.854607   19862 provision.go:86] duration metric: configureAuth took 469.312672ms
	I0203 15:01:13.854620   19862 ubuntu.go:193] setting minikube options for container-runtime
	I0203 15:01:13.854767   19862 config.go:180] Loaded profile config "old-k8s-version-136000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0203 15:01:13.854835   19862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:01:13.943052   19862 main.go:141] libmachine: Using SSH client type: native
	I0203 15:01:13.943192   19862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55132 <nil> <nil>}
	I0203 15:01:13.943208   19862 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 15:01:14.072694   19862 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0203 15:01:14.072708   19862 ubuntu.go:71] root file system type: overlay
	I0203 15:01:14.072957   19862 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 15:01:14.073064   19862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:01:14.136496   19862 main.go:141] libmachine: Using SSH client type: native
	I0203 15:01:14.136687   19862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55132 <nil> <nil>}
	I0203 15:01:14.136750   19862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 15:01:14.271153   19862 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 15:01:14.271253   19862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:01:14.335290   19862 main.go:141] libmachine: Using SSH client type: native
	I0203 15:01:14.335437   19862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55132 <nil> <nil>}
	I0203 15:01:14.335449   19862 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 15:01:14.970874   19862 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-03 23:01:14.268819802 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0203 15:01:14.970900   19862 machine.go:91] provisioned docker machine in 1.982974492s
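Side note on the unit update just applied: the new docker.service is written to docker.service.new via sudo tee, diffed against the existing unit, and only swapped in (with daemon-reload, enable, restart) when they differ, which is why the diff appears in the output. The replaced ExecStart line embeds the TLS cert paths and the insecure-registry CIDR. A minimal sketch of rendering such a line from a template is shown below; the template text and field names are assumptions for illustration, not minikube's actual template.

    package main

    import (
        "os"
        "text/template"
    )

    // execStartTmpl approximates the ExecStart override seen in the diff above.
    const execStartTmpl = "ExecStart=\nExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}} --label provider={{.Provider}} --insecure-registry {{.ServiceCIDR}}\n"

    func main() {
        t := template.Must(template.New("execstart").Parse(execStartTmpl))
        _ = t.Execute(os.Stdout, map[string]string{
            "CACert":      "/etc/docker/ca.pem",
            "ServerCert":  "/etc/docker/server.pem",
            "ServerKey":   "/etc/docker/server-key.pem",
            "Provider":    "docker",
            "ServiceCIDR": "10.96.0.0/12",
        })
    }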
	I0203 15:01:14.970913   19862 client.go:171] LocalClient.Create took 10.156216583s
	I0203 15:01:14.970935   19862 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-136000" took 10.156318936s
	I0203 15:01:14.970942   19862 start.go:300] post-start starting for "old-k8s-version-136000" (driver="docker")
	I0203 15:01:14.970946   19862 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 15:01:14.971021   19862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 15:01:14.971096   19862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:01:15.036412   19862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55132 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa Username:docker}
	I0203 15:01:15.130005   19862 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 15:01:15.134057   19862 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0203 15:01:15.134075   19862 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0203 15:01:15.134082   19862 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0203 15:01:15.134086   19862 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0203 15:01:15.134097   19862 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/addons for local assets ...
	I0203 15:01:15.134218   19862 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/files for local assets ...
	I0203 15:01:15.134394   19862 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> 25682.pem in /etc/ssl/certs
	I0203 15:01:15.134596   19862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 15:01:15.142398   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /etc/ssl/certs/25682.pem (1708 bytes)
	I0203 15:01:15.159787   19862 start.go:303] post-start completed in 188.828643ms
	I0203 15:01:15.160294   19862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-136000
	I0203 15:01:15.220370   19862 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/config.json ...
	I0203 15:01:15.220911   19862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 15:01:15.220999   19862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:01:15.281557   19862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55132 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa Username:docker}
	I0203 15:01:15.372411   19862 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0203 15:01:15.377695   19862 start.go:128] duration metric: createHost completed in 10.60730173s
	I0203 15:01:15.377718   19862 start.go:83] releasing machines lock for "old-k8s-version-136000", held for 10.607414291s
	I0203 15:01:15.377823   19862 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-136000
	I0203 15:01:15.446571   19862 ssh_runner.go:195] Run: cat /version.json
	I0203 15:01:15.446606   19862 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0203 15:01:15.446677   19862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:01:15.446686   19862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:01:15.520897   19862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55132 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa Username:docker}
	I0203 15:01:15.520906   19862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55132 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa Username:docker}
	I0203 15:01:15.805895   19862 ssh_runner.go:195] Run: systemctl --version
	I0203 15:01:15.810937   19862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 15:01:15.816451   19862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0203 15:01:15.839770   19862 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0203 15:01:15.839852   19862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0203 15:01:15.856786   19862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0203 15:01:15.866169   19862 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0203 15:01:15.866189   19862 start.go:483] detecting cgroup driver to use...
	I0203 15:01:15.866203   19862 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 15:01:15.866350   19862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 15:01:15.882007   19862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0203 15:01:15.891878   19862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 15:01:15.902509   19862 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 15:01:15.902582   19862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 15:01:15.913482   19862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 15:01:15.923202   19862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 15:01:15.932774   19862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 15:01:15.942169   19862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 15:01:15.951062   19862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 15:01:15.960597   19862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 15:01:15.970706   19862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 15:01:15.979276   19862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:01:16.051611   19862 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 15:01:16.126446   19862 start.go:483] detecting cgroup driver to use...
	I0203 15:01:16.126467   19862 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 15:01:16.126549   19862 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 15:01:16.142787   19862 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0203 15:01:16.142855   19862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 15:01:16.156405   19862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 15:01:16.170796   19862 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 15:01:16.275182   19862 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 15:01:16.375162   19862 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 15:01:16.375179   19862 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0203 15:01:16.391556   19862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:01:16.489323   19862 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 15:01:16.733325   19862 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 15:01:16.768102   19862 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 15:01:16.842909   19862 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	I0203 15:01:16.843137   19862 cli_runner.go:164] Run: docker exec -t old-k8s-version-136000 dig +short host.docker.internal
	I0203 15:01:16.966977   19862 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0203 15:01:16.967115   19862 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0203 15:01:16.972308   19862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 15:01:16.983843   19862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:01:17.043620   19862 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 15:01:17.043702   19862 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 15:01:17.069439   19862 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0203 15:01:17.069456   19862 docker.go:560] Images already preloaded, skipping extraction
	I0203 15:01:17.069544   19862 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 15:01:17.095069   19862 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0203 15:01:17.095093   19862 cache_images.go:84] Images are preloaded, skipping loading
	I0203 15:01:17.095177   19862 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 15:01:17.170817   19862 cni.go:84] Creating CNI manager for ""
	I0203 15:01:17.170834   19862 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0203 15:01:17.170854   19862 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0203 15:01:17.170870   19862 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-136000 NodeName:old-k8s-version-136000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0203 15:01:17.170984   19862 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-136000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-136000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 15:01:17.171052   19862 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-136000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0203 15:01:17.171101   19862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0203 15:01:17.180980   19862 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 15:01:17.181052   19862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 15:01:17.190150   19862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0203 15:01:17.205131   19862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 15:01:17.220170   19862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0203 15:01:17.235298   19862 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0203 15:01:17.239682   19862 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 15:01:17.251307   19862 certs.go:56] Setting up /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000 for IP: 192.168.67.2
	I0203 15:01:17.251326   19862 certs.go:186] acquiring lock for shared ca certs: {Name:mkdec04c6cc16ac0dcab0ae849b602e6c1942576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:01:17.251531   19862 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key
	I0203 15:01:17.251598   19862 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key
	I0203 15:01:17.251645   19862 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.key
	I0203 15:01:17.251658   19862 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt with IP's: []
	I0203 15:01:17.308371   19862 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt ...
	I0203 15:01:17.308393   19862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: {Name:mkcd67f923f0e04e9cb8601b4f51bdbba081a5b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:01:17.308744   19862 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.key ...
	I0203 15:01:17.308753   19862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.key: {Name:mk70ceb7b3519640f17dd771d0c2bd69ce7cc490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:01:17.308997   19862 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.key.c7fa3a9e
	I0203 15:01:17.309013   19862 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0203 15:01:17.493340   19862 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.crt.c7fa3a9e ...
	I0203 15:01:17.493354   19862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.crt.c7fa3a9e: {Name:mk4c68462e1ad59d00cde0439cfe934ca2ae3f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:01:17.493649   19862 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.key.c7fa3a9e ...
	I0203 15:01:17.493657   19862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.key.c7fa3a9e: {Name:mk6043d01e63efdbb835239f8296706533d4c028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:01:17.493845   19862 certs.go:333] copying /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.crt
	I0203 15:01:17.494026   19862 certs.go:337] copying /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.key
	I0203 15:01:17.494176   19862 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/proxy-client.key
	I0203 15:01:17.494190   19862 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/proxy-client.crt with IP's: []
	I0203 15:01:17.607914   19862 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/proxy-client.crt ...
	I0203 15:01:17.607928   19862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/proxy-client.crt: {Name:mkae54fd2748717ce4cbf8a7ee086e5a32dab62c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:01:17.608271   19862 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/proxy-client.key ...
	I0203 15:01:17.608283   19862 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/proxy-client.key: {Name:mka343fe3e9552cc05d63d3c04c58596827e0204 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:01:17.608725   19862 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem (1338 bytes)
	W0203 15:01:17.608775   19862 certs.go:397] ignoring /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568_empty.pem, impossibly tiny 0 bytes
	I0203 15:01:17.608786   19862 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem (1675 bytes)
	I0203 15:01:17.608820   19862 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem (1078 bytes)
	I0203 15:01:17.608850   19862 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem (1123 bytes)
	I0203 15:01:17.608881   19862 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem (1675 bytes)
	I0203 15:01:17.608952   19862 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem (1708 bytes)
	I0203 15:01:17.609472   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0203 15:01:17.628864   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 15:01:17.648306   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 15:01:17.668005   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 15:01:17.687979   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 15:01:17.707296   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 15:01:17.727205   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 15:01:17.746651   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 15:01:17.767003   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 15:01:17.786956   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem --> /usr/share/ca-certificates/2568.pem (1338 bytes)
	I0203 15:01:17.805797   19862 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /usr/share/ca-certificates/25682.pem (1708 bytes)
	I0203 15:01:17.825589   19862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 15:01:17.840730   19862 ssh_runner.go:195] Run: openssl version
	I0203 15:01:17.847780   19862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25682.pem && ln -fs /usr/share/ca-certificates/25682.pem /etc/ssl/certs/25682.pem"
	I0203 15:01:17.857683   19862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25682.pem
	I0203 15:01:17.862131   19862 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb  3 22:13 /usr/share/ca-certificates/25682.pem
	I0203 15:01:17.862185   19862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25682.pem
	I0203 15:01:17.868421   19862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25682.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 15:01:17.877780   19862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 15:01:17.887295   19862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 15:01:17.892630   19862 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb  3 22:08 /usr/share/ca-certificates/minikubeCA.pem
	I0203 15:01:17.892697   19862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 15:01:17.899298   19862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 15:01:17.908352   19862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2568.pem && ln -fs /usr/share/ca-certificates/2568.pem /etc/ssl/certs/2568.pem"
	I0203 15:01:17.917372   19862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2568.pem
	I0203 15:01:17.921625   19862 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb  3 22:13 /usr/share/ca-certificates/2568.pem
	I0203 15:01:17.921677   19862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2568.pem
	I0203 15:01:17.927528   19862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2568.pem /etc/ssl/certs/51391683.0"
	I0203 15:01:17.936330   19862 kubeadm.go:401] StartCluster: {Name:old-k8s-version-136000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:01:17.936447   19862 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 15:01:17.960144   19862 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 15:01:17.968323   19862 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 15:01:17.975885   19862 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0203 15:01:17.975943   19862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 15:01:17.984297   19862 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 15:01:17.984320   19862 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 15:01:18.035727   19862 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0203 15:01:18.035771   19862 kubeadm.go:322] [preflight] Running pre-flight checks
	I0203 15:01:18.388563   19862 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 15:01:18.388681   19862 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 15:01:18.388773   19862 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 15:01:18.639046   19862 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 15:01:18.640145   19862 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 15:01:18.646775   19862 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0203 15:01:18.713795   19862 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 15:01:18.756068   19862 out.go:204]   - Generating certificates and keys ...
	I0203 15:01:18.756169   19862 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0203 15:01:18.756261   19862 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0203 15:01:18.780053   19862 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0203 15:01:18.933774   19862 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0203 15:01:19.265340   19862 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0203 15:01:19.363543   19862 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0203 15:01:19.613185   19862 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0203 15:01:19.613325   19862 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-136000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0203 15:01:19.705479   19862 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0203 15:01:19.705922   19862 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-136000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0203 15:01:19.767479   19862 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0203 15:01:20.038350   19862 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0203 15:01:20.195253   19862 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0203 15:01:20.195321   19862 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 15:01:20.310360   19862 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 15:01:20.420421   19862 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 15:01:20.532378   19862 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 15:01:20.760933   19862 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 15:01:20.761544   19862 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 15:01:20.783222   19862 out.go:204]   - Booting up control plane ...
	I0203 15:01:20.783299   19862 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 15:01:20.783377   19862 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 15:01:20.783462   19862 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 15:01:20.783534   19862 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 15:01:20.783655   19862 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 15:02:00.772022   19862 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0203 15:02:00.773035   19862 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:02:00.773265   19862 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:02:05.774183   19862 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:02:05.774363   19862 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:02:15.775108   19862 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:02:15.775248   19862 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:02:35.777566   19862 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:02:35.777845   19862 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:03:15.787487   19862 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:03:15.787744   19862 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:03:15.787762   19862 kubeadm.go:322] 
	I0203 15:03:15.787853   19862 kubeadm.go:322] Unfortunately, an error has occurred:
	I0203 15:03:15.787891   19862 kubeadm.go:322] 	timed out waiting for the condition
	I0203 15:03:15.787897   19862 kubeadm.go:322] 
	I0203 15:03:15.787923   19862 kubeadm.go:322] This error is likely caused by:
	I0203 15:03:15.787961   19862 kubeadm.go:322] 	- The kubelet is not running
	I0203 15:03:15.788068   19862 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 15:03:15.788078   19862 kubeadm.go:322] 
	I0203 15:03:15.788170   19862 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 15:03:15.788201   19862 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0203 15:03:15.788222   19862 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0203 15:03:15.788225   19862 kubeadm.go:322] 
	I0203 15:03:15.788313   19862 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 15:03:15.788387   19862 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0203 15:03:15.788456   19862 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0203 15:03:15.788501   19862 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0203 15:03:15.788573   19862 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0203 15:03:15.788604   19862 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0203 15:03:15.791394   19862 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0203 15:03:15.791467   19862 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0203 15:03:15.791580   19862 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0203 15:03:15.791668   19862 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 15:03:15.791732   19862 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 15:03:15.791810   19862 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0203 15:03:15.791966   19862 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-136000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-136000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-136000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-136000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0203 15:03:15.791997   19862 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0203 15:03:16.209087   19862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 15:03:16.219804   19862 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0203 15:03:16.219878   19862 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 15:03:16.227956   19862 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 15:03:16.227990   19862 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 15:03:16.276561   19862 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0203 15:03:16.276635   19862 kubeadm.go:322] [preflight] Running pre-flight checks
	I0203 15:03:16.578090   19862 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 15:03:16.578181   19862 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 15:03:16.578273   19862 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 15:03:16.824121   19862 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 15:03:16.824905   19862 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 15:03:16.832036   19862 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0203 15:03:16.895111   19862 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 15:03:16.917255   19862 out.go:204]   - Generating certificates and keys ...
	I0203 15:03:16.917345   19862 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0203 15:03:16.917397   19862 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0203 15:03:16.917473   19862 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 15:03:16.917535   19862 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0203 15:03:16.917602   19862 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 15:03:16.917666   19862 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0203 15:03:16.917726   19862 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0203 15:03:16.917828   19862 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0203 15:03:16.917919   19862 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 15:03:16.917992   19862 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 15:03:16.918020   19862 kubeadm.go:322] [certs] Using the existing "sa" key
	I0203 15:03:16.918058   19862 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 15:03:17.005004   19862 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 15:03:17.192175   19862 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 15:03:17.357187   19862 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 15:03:17.486700   19862 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 15:03:17.487263   19862 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 15:03:17.509528   19862 out.go:204]   - Booting up control plane ...
	I0203 15:03:17.509713   19862 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 15:03:17.509846   19862 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 15:03:17.509999   19862 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 15:03:17.510138   19862 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 15:03:17.510451   19862 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 15:03:57.496901   19862 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0203 15:03:57.497462   19862 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:03:57.497676   19862 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:04:02.498692   19862 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:04:02.498918   19862 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:04:12.500447   19862 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:04:12.500671   19862 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:04:32.502634   19862 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:04:32.502878   19862 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:05:12.505084   19862 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:05:12.505243   19862 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:05:12.505254   19862 kubeadm.go:322] 
	I0203 15:05:12.505305   19862 kubeadm.go:322] Unfortunately, an error has occurred:
	I0203 15:05:12.505357   19862 kubeadm.go:322] 	timed out waiting for the condition
	I0203 15:05:12.505364   19862 kubeadm.go:322] 
	I0203 15:05:12.505398   19862 kubeadm.go:322] This error is likely caused by:
	I0203 15:05:12.505442   19862 kubeadm.go:322] 	- The kubelet is not running
	I0203 15:05:12.505552   19862 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 15:05:12.505561   19862 kubeadm.go:322] 
	I0203 15:05:12.505636   19862 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 15:05:12.505658   19862 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0203 15:05:12.505682   19862 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0203 15:05:12.505687   19862 kubeadm.go:322] 
	I0203 15:05:12.505772   19862 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 15:05:12.505842   19862 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0203 15:05:12.505922   19862 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0203 15:05:12.505959   19862 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0203 15:05:12.506018   19862 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0203 15:05:12.506047   19862 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0203 15:05:12.508627   19862 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0203 15:05:12.508703   19862 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0203 15:05:12.508814   19862 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0203 15:05:12.508908   19862 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 15:05:12.508997   19862 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 15:05:12.509072   19862 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0203 15:05:12.509092   19862 kubeadm.go:403] StartCluster complete in 3m54.558739312s
	I0203 15:05:12.509190   19862 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:05:12.531869   19862 logs.go:279] 0 containers: []
	W0203 15:05:12.531882   19862 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:05:12.531950   19862 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:05:12.555067   19862 logs.go:279] 0 containers: []
	W0203 15:05:12.555079   19862 logs.go:281] No container was found matching "etcd"
	I0203 15:05:12.555147   19862 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:05:12.578514   19862 logs.go:279] 0 containers: []
	W0203 15:05:12.578533   19862 logs.go:281] No container was found matching "coredns"
	I0203 15:05:12.578599   19862 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:05:12.602409   19862 logs.go:279] 0 containers: []
	W0203 15:05:12.602432   19862 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:05:12.602510   19862 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:05:12.626377   19862 logs.go:279] 0 containers: []
	W0203 15:05:12.626394   19862 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:05:12.626485   19862 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:05:12.649203   19862 logs.go:279] 0 containers: []
	W0203 15:05:12.649216   19862 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:05:12.649296   19862 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:05:12.691137   19862 logs.go:279] 0 containers: []
	W0203 15:05:12.691150   19862 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:05:12.691220   19862 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:05:12.716772   19862 logs.go:279] 0 containers: []
	W0203 15:05:12.716786   19862 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:05:12.716793   19862 logs.go:124] Gathering logs for dmesg ...
	I0203 15:05:12.716802   19862 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:05:12.728969   19862 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:05:12.728982   19862 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:05:12.783506   19862 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:05:12.783519   19862 logs.go:124] Gathering logs for Docker ...
	I0203 15:05:12.783527   19862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:05:12.800613   19862 logs.go:124] Gathering logs for container status ...
	I0203 15:05:12.800627   19862 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:05:14.852190   19862 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05150261s)
	I0203 15:05:14.852319   19862 logs.go:124] Gathering logs for kubelet ...
	I0203 15:05:14.852326   19862 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0203 15:05:14.890034   19862 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0203 15:05:14.890060   19862 out.go:239] * 
	* 
	W0203 15:05:14.890189   19862 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 15:05:14.890207   19862 out.go:239] * 
	* 
	W0203 15:05:14.890923   19862 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 15:05:14.953455   19862 out.go:177] 
	W0203 15:05:14.995683   19862 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 15:05:14.995790   19862 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0203 15:05:14.995836   19862 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0203 15:05:15.058641   19862 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-136000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
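Note: the stderr above repeatedly flags a Docker cgroup-driver mismatch ([WARNING IsDockerSystemdCheck]: "cgroupfs" detected, "systemd" recommended), and minikube's own output suggests retrying with an extra kubelet config flag. A minimal manual follow-up along the lines of that suggestion, assuming the same profile name, driver, and Kubernetes version as the failing run (a sketch, not part of the recorded test run):

	# check which cgroup driver the Docker daemon inside the node container reports
	docker exec old-k8s-version-136000 docker info --format '{{.CgroupDriver}}'
	# retry the start with the kubelet cgroup driver pinned to systemd, as the log suggests
	out/minikube-darwin-amd64 start -p old-k8s-version-136000 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd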
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-136000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-136000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423",
	        "Created": "2023-02-03T23:01:11.889189264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280897,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T23:01:12.185155252Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hostname",
	        "HostsPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hosts",
	        "LogPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423-json.log",
	        "Name": "/old-k8s-version-136000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-136000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-136000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5-init/diff:/var/lib/docker/overlay2/48b9eff26e94f4439154aad348135bd66f3f3733ee1f2bd22fc60e3a240f764f/diff:/var/lib/docker/overlay2/89930e70b646c5893dab0f6f4274a9fb3b60a11d62da2f59d4b55fbf1c480a90/diff:/var/lib/docker/overlay2/3ae0575a256264d050211e3ca122b2804683b9f4323f7a2c2a2d45f4df3254dd/diff:/var/lib/docker/overlay2/6468a293a6ba199c732872fb7807de809fa2ff9ecdccaeb7146f28e1a4dc9607/diff:/var/lib/docker/overlay2/3fab248b5834a764e1996b2fea0af0100ffc2c150728124745a8e42d43a2193d/diff:/var/lib/docker/overlay2/1ec21b4015d44918fda148d959030dadcaa3527172fde96571978bdabab6921e/diff:/var/lib/docker/overlay2/5465a266a0268ad0ffa1c12afbc320e2232b025ee4eaa5c74b2f5b236ce5285d/diff:/var/lib/docker/overlay2/61b7474b98e6431b966662b98c31f46eb982bdd7098bfccdad928e6c3c0a9024/diff:/var/lib/docker/overlay2/d0925bff8df24b32d176f1438969c0c3adac5ec1bc1da61c2a8bf17e4fd9313b/diff:/var/lib/docker/overlay2/b6c213
617f12dea208efc9c642db1147a22658b32383a0256106a994fcafebca/diff:/var/lib/docker/overlay2/5127e35d4cf68de9ece51806ff390f9b88bac61eaa8bfdf4cf5d6ab1e5b2ca27/diff:/var/lib/docker/overlay2/3d041d254d21e7ec2e2abdce56a3e6eadb3f668238bf3667e7c25effdcc05940/diff:/var/lib/docker/overlay2/15bab989d641601a640d89b58f645e79668cb801bf10066ecd9790e4c8bbd4f1/diff:/var/lib/docker/overlay2/d6e45696a59c84a5b4ad5ad0bec8b561335a71b3c4eaaa35bcbcc00bd3fbcc1a/diff:/var/lib/docker/overlay2/d0a13d3859926a84eb9c7b571fa8c670d15ebf0ab75e6e8971a7b8679b316ca1/diff:/var/lib/docker/overlay2/a5096e1509a8455c4d67f60b17102a08c795ad1bdbeeac3dd75c3b05ec6d922c/diff:/var/lib/docker/overlay2/aeeda7f653d5dcfbb5ef8a7b53a6aba12a5892c04d984f10a71be11833addb2d/diff:/var/lib/docker/overlay2/84bf768303dfde933d5690feb659b1acd5419ca63d78c4760218d578794c3bbe/diff:/var/lib/docker/overlay2/dec6762f77828143e0cb548cc3a6bb9cc10b9f4376070bc49558da8dfd0b7d2e/diff:/var/lib/docker/overlay2/cc9805f6c705d4d0c6c7675e7745ab0dcdd90879809a2089256c0606e80cee7a/diff:/var/lib/d
ocker/overlay2/e34b4063934c19fe1e614a10ef1e9582f55283fa37c9d0b89d0df8ca32a8a03a/diff:/var/lib/docker/overlay2/c6b6cf801ae9739234022d5e5c55176ee1249b3441400f8b9dbde2c15c6d66e3/diff:/var/lib/docker/overlay2/73dfe58a9f4125f321d10ef97d5c2d4951480455bb243f166600ead63c22f5c2/diff:/var/lib/docker/overlay2/476ba412f9e61cc020124b5051db9c99ea08176881e535e0b5fe6ddb51b94a72/diff:/var/lib/docker/overlay2/2729a4e84f2d55dc49c9417254fc26c0baa21f93cd9b58386f869cf5add162c1/diff:/var/lib/docker/overlay2/8523001ce06172b58b31ebf311f62bf435ed3a3d48fec58d3f1239f29386a28b/diff:/var/lib/docker/overlay2/2b7edb3177897200229f3ba188cfd00e16df91cf85b91a5f08ddbfa15d898a3d/diff:/var/lib/docker/overlay2/94231ff2ac5bf304d3c25d204f1a7b2195ef2230bfbb7bb5a1a1d6f2f4faad6a/diff:/var/lib/docker/overlay2/698d3cd800bae40e0aeb942360c67b793550c24bab66ba43080cbcaa500a9069/diff:/var/lib/docker/overlay2/6aadd46423b70866f00e0f4f83310711c1bc22b4dc8989e6b58cd6254540c428/diff:/var/lib/docker/overlay2/035afbe91bfd3bebd444b29f3ceed1e954aab275fca0c8aaf2364df71f4
6e0c3/diff:/var/lib/docker/overlay2/bc68049ba1568fe8bb188720c62bcc993e62a364901ba41a533aa2991cceaf82/diff:/var/lib/docker/overlay2/c3373595ff40ba0ece2698f99fc2e1c9a83c0ef6a1df119125e3009256dee2ed/diff:/var/lib/docker/overlay2/59c87dca7d8987a7e1b5cd959772e06b96d6ecb36399ff9e35a1ecfe4ed33345/diff:/var/lib/docker/overlay2/22434c33a4994657a469b040789f269ac912f4046d76f2531dff05de4700fb3b/diff:/var/lib/docker/overlay2/699ea76dd0a43fedc031501535714f087d7ec3f37593390c9e81c029373c7f8f/diff:/var/lib/docker/overlay2/e9414c264977801651ed9f3ee268cd0f245614747e184e8f3170e1e95d1fc081/diff:/var/lib/docker/overlay2/2781a0c689754699793aa9bdfeeabdaa1c6905e265302dd267c6c12daa01eb9c/diff:/var/lib/docker/overlay2/4b59a1fc73d3e865eaf7e2e62fd6d2808234c79d79b6b30f6b1a482a291580d3/diff:/var/lib/docker/overlay2/7f51e83dcff3227064daa2b7cc6a7c87f8f5e415fa8723316c24512d6029941d/diff:/var/lib/docker/overlay2/50662c60babc4d383f2af76fc66f3712bcc9e85a50f0525fa680c8336af46ce3/diff:/var/lib/docker/overlay2/2112d8437fae31ae95f85bdf08e3f29d09d7b8
adf34c9608a2e3bfecc049e0c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-136000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-136000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-136000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13c5a8c73f7d0b11fc92b00e1930aa2a554af42a7b3351904954459cbd927fce",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55136"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/13c5a8c73f7d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-136000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "845795d4cf37",
	                        "old-k8s-version-136000"
	                    ],
	                    "NetworkID": "a4c82c2a3592223db620bf95332091613324019646bbe58152af123c5085aba4",
	                    "EndpointID": "edd09a73a6c2e6bd1a1f9a964b90c22bb2999232e698e5ae8e29125bd541e10e",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
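The inspect output above shows the kic container itself is healthy at the Docker level ("State.Status": "running", exit code 0, the 22/2376/8443 ports published on localhost), so the failure sits inside the node: the kubelet never answered on 10248 and the apiserver never came up. A quick container-level spot check, assuming the same container name as above (illustrative only):

	# confirm the node container's state and the host port mapped to the apiserver
	docker inspect -f '{{.State.Status}} {{.State.ExitCode}}' old-k8s-version-136000
	docker port old-k8s-version-136000 8443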
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000
E0203 15:05:15.580652    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 6 (420.679574ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 15:05:15.629076   20961 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-136000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-136000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (251.78s)
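Because this serial group reuses the old-k8s-version-136000 profile, the subtests that follow inherit this broken state: the container is up, but kubeadm init never completed, so no usable cluster or kubeconfig entry exists. The boxed advice in the output above asks for logs to be attached; a sketch of collecting them for this profile, assuming the same binary and profile name as the run above, would be:

	# gather full minikube logs for the failing profile, per the suggestion box in the output
	out/minikube-darwin-amd64 -p old-k8s-version-136000 logs --file=logs.txt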

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-136000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-136000 create -f testdata/busybox.yaml: exit status 1 (35.697406ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-136000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-136000 create -f testdata/busybox.yaml failed: exit status 1
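As expected after the failed FirstStart, the kubeconfig the tests point at has no "old-k8s-version-136000" entry, so any kubectl --context call against it fails immediately. One way to confirm this by hand, using the kubeconfig path that the status errors in this report print (treat the exact path as an assumption outside the recorded run):

	# list the contexts actually present in the test run's kubeconfig
	kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/15770-1719/kubeconfig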
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-136000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-136000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423",
	        "Created": "2023-02-03T23:01:11.889189264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280897,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T23:01:12.185155252Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hostname",
	        "HostsPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hosts",
	        "LogPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423-json.log",
	        "Name": "/old-k8s-version-136000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-136000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-136000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5-init/diff:/var/lib/docker/overlay2/48b9eff26e94f4439154aad348135bd66f3f3733ee1f2bd22fc60e3a240f764f/diff:/var/lib/docker/overlay2/89930e70b646c5893dab0f6f4274a9fb3b60a11d62da2f59d4b55fbf1c480a90/diff:/var/lib/docker/overlay2/3ae0575a256264d050211e3ca122b2804683b9f4323f7a2c2a2d45f4df3254dd/diff:/var/lib/docker/overlay2/6468a293a6ba199c732872fb7807de809fa2ff9ecdccaeb7146f28e1a4dc9607/diff:/var/lib/docker/overlay2/3fab248b5834a764e1996b2fea0af0100ffc2c150728124745a8e42d43a2193d/diff:/var/lib/docker/overlay2/1ec21b4015d44918fda148d959030dadcaa3527172fde96571978bdabab6921e/diff:/var/lib/docker/overlay2/5465a266a0268ad0ffa1c12afbc320e2232b025ee4eaa5c74b2f5b236ce5285d/diff:/var/lib/docker/overlay2/61b7474b98e6431b966662b98c31f46eb982bdd7098bfccdad928e6c3c0a9024/diff:/var/lib/docker/overlay2/d0925bff8df24b32d176f1438969c0c3adac5ec1bc1da61c2a8bf17e4fd9313b/diff:/var/lib/docker/overlay2/b6c213
617f12dea208efc9c642db1147a22658b32383a0256106a994fcafebca/diff:/var/lib/docker/overlay2/5127e35d4cf68de9ece51806ff390f9b88bac61eaa8bfdf4cf5d6ab1e5b2ca27/diff:/var/lib/docker/overlay2/3d041d254d21e7ec2e2abdce56a3e6eadb3f668238bf3667e7c25effdcc05940/diff:/var/lib/docker/overlay2/15bab989d641601a640d89b58f645e79668cb801bf10066ecd9790e4c8bbd4f1/diff:/var/lib/docker/overlay2/d6e45696a59c84a5b4ad5ad0bec8b561335a71b3c4eaaa35bcbcc00bd3fbcc1a/diff:/var/lib/docker/overlay2/d0a13d3859926a84eb9c7b571fa8c670d15ebf0ab75e6e8971a7b8679b316ca1/diff:/var/lib/docker/overlay2/a5096e1509a8455c4d67f60b17102a08c795ad1bdbeeac3dd75c3b05ec6d922c/diff:/var/lib/docker/overlay2/aeeda7f653d5dcfbb5ef8a7b53a6aba12a5892c04d984f10a71be11833addb2d/diff:/var/lib/docker/overlay2/84bf768303dfde933d5690feb659b1acd5419ca63d78c4760218d578794c3bbe/diff:/var/lib/docker/overlay2/dec6762f77828143e0cb548cc3a6bb9cc10b9f4376070bc49558da8dfd0b7d2e/diff:/var/lib/docker/overlay2/cc9805f6c705d4d0c6c7675e7745ab0dcdd90879809a2089256c0606e80cee7a/diff:/var/lib/d
ocker/overlay2/e34b4063934c19fe1e614a10ef1e9582f55283fa37c9d0b89d0df8ca32a8a03a/diff:/var/lib/docker/overlay2/c6b6cf801ae9739234022d5e5c55176ee1249b3441400f8b9dbde2c15c6d66e3/diff:/var/lib/docker/overlay2/73dfe58a9f4125f321d10ef97d5c2d4951480455bb243f166600ead63c22f5c2/diff:/var/lib/docker/overlay2/476ba412f9e61cc020124b5051db9c99ea08176881e535e0b5fe6ddb51b94a72/diff:/var/lib/docker/overlay2/2729a4e84f2d55dc49c9417254fc26c0baa21f93cd9b58386f869cf5add162c1/diff:/var/lib/docker/overlay2/8523001ce06172b58b31ebf311f62bf435ed3a3d48fec58d3f1239f29386a28b/diff:/var/lib/docker/overlay2/2b7edb3177897200229f3ba188cfd00e16df91cf85b91a5f08ddbfa15d898a3d/diff:/var/lib/docker/overlay2/94231ff2ac5bf304d3c25d204f1a7b2195ef2230bfbb7bb5a1a1d6f2f4faad6a/diff:/var/lib/docker/overlay2/698d3cd800bae40e0aeb942360c67b793550c24bab66ba43080cbcaa500a9069/diff:/var/lib/docker/overlay2/6aadd46423b70866f00e0f4f83310711c1bc22b4dc8989e6b58cd6254540c428/diff:/var/lib/docker/overlay2/035afbe91bfd3bebd444b29f3ceed1e954aab275fca0c8aaf2364df71f4
6e0c3/diff:/var/lib/docker/overlay2/bc68049ba1568fe8bb188720c62bcc993e62a364901ba41a533aa2991cceaf82/diff:/var/lib/docker/overlay2/c3373595ff40ba0ece2698f99fc2e1c9a83c0ef6a1df119125e3009256dee2ed/diff:/var/lib/docker/overlay2/59c87dca7d8987a7e1b5cd959772e06b96d6ecb36399ff9e35a1ecfe4ed33345/diff:/var/lib/docker/overlay2/22434c33a4994657a469b040789f269ac912f4046d76f2531dff05de4700fb3b/diff:/var/lib/docker/overlay2/699ea76dd0a43fedc031501535714f087d7ec3f37593390c9e81c029373c7f8f/diff:/var/lib/docker/overlay2/e9414c264977801651ed9f3ee268cd0f245614747e184e8f3170e1e95d1fc081/diff:/var/lib/docker/overlay2/2781a0c689754699793aa9bdfeeabdaa1c6905e265302dd267c6c12daa01eb9c/diff:/var/lib/docker/overlay2/4b59a1fc73d3e865eaf7e2e62fd6d2808234c79d79b6b30f6b1a482a291580d3/diff:/var/lib/docker/overlay2/7f51e83dcff3227064daa2b7cc6a7c87f8f5e415fa8723316c24512d6029941d/diff:/var/lib/docker/overlay2/50662c60babc4d383f2af76fc66f3712bcc9e85a50f0525fa680c8336af46ce3/diff:/var/lib/docker/overlay2/2112d8437fae31ae95f85bdf08e3f29d09d7b8
adf34c9608a2e3bfecc049e0c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-136000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-136000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-136000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13c5a8c73f7d0b11fc92b00e1930aa2a554af42a7b3351904954459cbd927fce",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55136"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/13c5a8c73f7d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-136000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "845795d4cf37",
	                        "old-k8s-version-136000"
	                    ],
	                    "NetworkID": "a4c82c2a3592223db620bf95332091613324019646bbe58152af123c5085aba4",
	                    "EndpointID": "edd09a73a6c2e6bd1a1f9a964b90c22bb2999232e698e5ae8e29125bd541e10e",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 6 (403.332869ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 15:05:16.127828   20974 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-136000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-136000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-136000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-136000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423",
	        "Created": "2023-02-03T23:01:11.889189264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280897,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T23:01:12.185155252Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hostname",
	        "HostsPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hosts",
	        "LogPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423-json.log",
	        "Name": "/old-k8s-version-136000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-136000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-136000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5-init/diff:/var/lib/docker/overlay2/48b9eff26e94f4439154aad348135bd66f3f3733ee1f2bd22fc60e3a240f764f/diff:/var/lib/docker/overlay2/89930e70b646c5893dab0f6f4274a9fb3b60a11d62da2f59d4b55fbf1c480a90/diff:/var/lib/docker/overlay2/3ae0575a256264d050211e3ca122b2804683b9f4323f7a2c2a2d45f4df3254dd/diff:/var/lib/docker/overlay2/6468a293a6ba199c732872fb7807de809fa2ff9ecdccaeb7146f28e1a4dc9607/diff:/var/lib/docker/overlay2/3fab248b5834a764e1996b2fea0af0100ffc2c150728124745a8e42d43a2193d/diff:/var/lib/docker/overlay2/1ec21b4015d44918fda148d959030dadcaa3527172fde96571978bdabab6921e/diff:/var/lib/docker/overlay2/5465a266a0268ad0ffa1c12afbc320e2232b025ee4eaa5c74b2f5b236ce5285d/diff:/var/lib/docker/overlay2/61b7474b98e6431b966662b98c31f46eb982bdd7098bfccdad928e6c3c0a9024/diff:/var/lib/docker/overlay2/d0925bff8df24b32d176f1438969c0c3adac5ec1bc1da61c2a8bf17e4fd9313b/diff:/var/lib/docker/overlay2/b6c213
617f12dea208efc9c642db1147a22658b32383a0256106a994fcafebca/diff:/var/lib/docker/overlay2/5127e35d4cf68de9ece51806ff390f9b88bac61eaa8bfdf4cf5d6ab1e5b2ca27/diff:/var/lib/docker/overlay2/3d041d254d21e7ec2e2abdce56a3e6eadb3f668238bf3667e7c25effdcc05940/diff:/var/lib/docker/overlay2/15bab989d641601a640d89b58f645e79668cb801bf10066ecd9790e4c8bbd4f1/diff:/var/lib/docker/overlay2/d6e45696a59c84a5b4ad5ad0bec8b561335a71b3c4eaaa35bcbcc00bd3fbcc1a/diff:/var/lib/docker/overlay2/d0a13d3859926a84eb9c7b571fa8c670d15ebf0ab75e6e8971a7b8679b316ca1/diff:/var/lib/docker/overlay2/a5096e1509a8455c4d67f60b17102a08c795ad1bdbeeac3dd75c3b05ec6d922c/diff:/var/lib/docker/overlay2/aeeda7f653d5dcfbb5ef8a7b53a6aba12a5892c04d984f10a71be11833addb2d/diff:/var/lib/docker/overlay2/84bf768303dfde933d5690feb659b1acd5419ca63d78c4760218d578794c3bbe/diff:/var/lib/docker/overlay2/dec6762f77828143e0cb548cc3a6bb9cc10b9f4376070bc49558da8dfd0b7d2e/diff:/var/lib/docker/overlay2/cc9805f6c705d4d0c6c7675e7745ab0dcdd90879809a2089256c0606e80cee7a/diff:/var/lib/d
ocker/overlay2/e34b4063934c19fe1e614a10ef1e9582f55283fa37c9d0b89d0df8ca32a8a03a/diff:/var/lib/docker/overlay2/c6b6cf801ae9739234022d5e5c55176ee1249b3441400f8b9dbde2c15c6d66e3/diff:/var/lib/docker/overlay2/73dfe58a9f4125f321d10ef97d5c2d4951480455bb243f166600ead63c22f5c2/diff:/var/lib/docker/overlay2/476ba412f9e61cc020124b5051db9c99ea08176881e535e0b5fe6ddb51b94a72/diff:/var/lib/docker/overlay2/2729a4e84f2d55dc49c9417254fc26c0baa21f93cd9b58386f869cf5add162c1/diff:/var/lib/docker/overlay2/8523001ce06172b58b31ebf311f62bf435ed3a3d48fec58d3f1239f29386a28b/diff:/var/lib/docker/overlay2/2b7edb3177897200229f3ba188cfd00e16df91cf85b91a5f08ddbfa15d898a3d/diff:/var/lib/docker/overlay2/94231ff2ac5bf304d3c25d204f1a7b2195ef2230bfbb7bb5a1a1d6f2f4faad6a/diff:/var/lib/docker/overlay2/698d3cd800bae40e0aeb942360c67b793550c24bab66ba43080cbcaa500a9069/diff:/var/lib/docker/overlay2/6aadd46423b70866f00e0f4f83310711c1bc22b4dc8989e6b58cd6254540c428/diff:/var/lib/docker/overlay2/035afbe91bfd3bebd444b29f3ceed1e954aab275fca0c8aaf2364df71f4
6e0c3/diff:/var/lib/docker/overlay2/bc68049ba1568fe8bb188720c62bcc993e62a364901ba41a533aa2991cceaf82/diff:/var/lib/docker/overlay2/c3373595ff40ba0ece2698f99fc2e1c9a83c0ef6a1df119125e3009256dee2ed/diff:/var/lib/docker/overlay2/59c87dca7d8987a7e1b5cd959772e06b96d6ecb36399ff9e35a1ecfe4ed33345/diff:/var/lib/docker/overlay2/22434c33a4994657a469b040789f269ac912f4046d76f2531dff05de4700fb3b/diff:/var/lib/docker/overlay2/699ea76dd0a43fedc031501535714f087d7ec3f37593390c9e81c029373c7f8f/diff:/var/lib/docker/overlay2/e9414c264977801651ed9f3ee268cd0f245614747e184e8f3170e1e95d1fc081/diff:/var/lib/docker/overlay2/2781a0c689754699793aa9bdfeeabdaa1c6905e265302dd267c6c12daa01eb9c/diff:/var/lib/docker/overlay2/4b59a1fc73d3e865eaf7e2e62fd6d2808234c79d79b6b30f6b1a482a291580d3/diff:/var/lib/docker/overlay2/7f51e83dcff3227064daa2b7cc6a7c87f8f5e415fa8723316c24512d6029941d/diff:/var/lib/docker/overlay2/50662c60babc4d383f2af76fc66f3712bcc9e85a50f0525fa680c8336af46ce3/diff:/var/lib/docker/overlay2/2112d8437fae31ae95f85bdf08e3f29d09d7b8
adf34c9608a2e3bfecc049e0c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-136000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-136000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-136000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13c5a8c73f7d0b11fc92b00e1930aa2a554af42a7b3351904954459cbd927fce",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55136"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/13c5a8c73f7d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-136000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "845795d4cf37",
	                        "old-k8s-version-136000"
	                    ],
	                    "NetworkID": "a4c82c2a3592223db620bf95332091613324019646bbe58152af123c5085aba4",
	                    "EndpointID": "edd09a73a6c2e6bd1a1f9a964b90c22bb2999232e698e5ae8e29125bd541e10e",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 6 (405.40554ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 15:05:16.591551   20986 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-136000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-136000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-136000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0203 15:05:16.860859    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:05:19.421596    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:05:23.861879    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:05:24.542690    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:05:34.784733    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:05:36.117341    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 15:05:37.643164    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:05:37.648625    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:05:37.659392    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:05:37.679554    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:05:37.720274    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:05:37.800376    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:05:37.961696    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:05:38.283859    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:05:38.924162    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:05:40.204430    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:05:41.955383    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:05:42.764817    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:05:47.885263    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:05:53.064140    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 15:05:55.265551    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:05:58.125735    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:06:02.398881    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:06:10.718073    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 15:06:18.606961    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:06:35.691303    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 15:06:36.228690    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:06:45.784198    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-136000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.188392681s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-136000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-136000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-136000 describe deploy/metrics-server -n kube-system: exit status 1 (35.20909ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-136000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-136000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-136000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-136000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423",
	        "Created": "2023-02-03T23:01:11.889189264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280897,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T23:01:12.185155252Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hostname",
	        "HostsPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hosts",
	        "LogPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423-json.log",
	        "Name": "/old-k8s-version-136000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-136000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-136000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5-init/diff:/var/lib/docker/overlay2/48b9eff26e94f4439154aad348135bd66f3f3733ee1f2bd22fc60e3a240f764f/diff:/var/lib/docker/overlay2/89930e70b646c5893dab0f6f4274a9fb3b60a11d62da2f59d4b55fbf1c480a90/diff:/var/lib/docker/overlay2/3ae0575a256264d050211e3ca122b2804683b9f4323f7a2c2a2d45f4df3254dd/diff:/var/lib/docker/overlay2/6468a293a6ba199c732872fb7807de809fa2ff9ecdccaeb7146f28e1a4dc9607/diff:/var/lib/docker/overlay2/3fab248b5834a764e1996b2fea0af0100ffc2c150728124745a8e42d43a2193d/diff:/var/lib/docker/overlay2/1ec21b4015d44918fda148d959030dadcaa3527172fde96571978bdabab6921e/diff:/var/lib/docker/overlay2/5465a266a0268ad0ffa1c12afbc320e2232b025ee4eaa5c74b2f5b236ce5285d/diff:/var/lib/docker/overlay2/61b7474b98e6431b966662b98c31f46eb982bdd7098bfccdad928e6c3c0a9024/diff:/var/lib/docker/overlay2/d0925bff8df24b32d176f1438969c0c3adac5ec1bc1da61c2a8bf17e4fd9313b/diff:/var/lib/docker/overlay2/b6c213
617f12dea208efc9c642db1147a22658b32383a0256106a994fcafebca/diff:/var/lib/docker/overlay2/5127e35d4cf68de9ece51806ff390f9b88bac61eaa8bfdf4cf5d6ab1e5b2ca27/diff:/var/lib/docker/overlay2/3d041d254d21e7ec2e2abdce56a3e6eadb3f668238bf3667e7c25effdcc05940/diff:/var/lib/docker/overlay2/15bab989d641601a640d89b58f645e79668cb801bf10066ecd9790e4c8bbd4f1/diff:/var/lib/docker/overlay2/d6e45696a59c84a5b4ad5ad0bec8b561335a71b3c4eaaa35bcbcc00bd3fbcc1a/diff:/var/lib/docker/overlay2/d0a13d3859926a84eb9c7b571fa8c670d15ebf0ab75e6e8971a7b8679b316ca1/diff:/var/lib/docker/overlay2/a5096e1509a8455c4d67f60b17102a08c795ad1bdbeeac3dd75c3b05ec6d922c/diff:/var/lib/docker/overlay2/aeeda7f653d5dcfbb5ef8a7b53a6aba12a5892c04d984f10a71be11833addb2d/diff:/var/lib/docker/overlay2/84bf768303dfde933d5690feb659b1acd5419ca63d78c4760218d578794c3bbe/diff:/var/lib/docker/overlay2/dec6762f77828143e0cb548cc3a6bb9cc10b9f4376070bc49558da8dfd0b7d2e/diff:/var/lib/docker/overlay2/cc9805f6c705d4d0c6c7675e7745ab0dcdd90879809a2089256c0606e80cee7a/diff:/var/lib/d
ocker/overlay2/e34b4063934c19fe1e614a10ef1e9582f55283fa37c9d0b89d0df8ca32a8a03a/diff:/var/lib/docker/overlay2/c6b6cf801ae9739234022d5e5c55176ee1249b3441400f8b9dbde2c15c6d66e3/diff:/var/lib/docker/overlay2/73dfe58a9f4125f321d10ef97d5c2d4951480455bb243f166600ead63c22f5c2/diff:/var/lib/docker/overlay2/476ba412f9e61cc020124b5051db9c99ea08176881e535e0b5fe6ddb51b94a72/diff:/var/lib/docker/overlay2/2729a4e84f2d55dc49c9417254fc26c0baa21f93cd9b58386f869cf5add162c1/diff:/var/lib/docker/overlay2/8523001ce06172b58b31ebf311f62bf435ed3a3d48fec58d3f1239f29386a28b/diff:/var/lib/docker/overlay2/2b7edb3177897200229f3ba188cfd00e16df91cf85b91a5f08ddbfa15d898a3d/diff:/var/lib/docker/overlay2/94231ff2ac5bf304d3c25d204f1a7b2195ef2230bfbb7bb5a1a1d6f2f4faad6a/diff:/var/lib/docker/overlay2/698d3cd800bae40e0aeb942360c67b793550c24bab66ba43080cbcaa500a9069/diff:/var/lib/docker/overlay2/6aadd46423b70866f00e0f4f83310711c1bc22b4dc8989e6b58cd6254540c428/diff:/var/lib/docker/overlay2/035afbe91bfd3bebd444b29f3ceed1e954aab275fca0c8aaf2364df71f4
6e0c3/diff:/var/lib/docker/overlay2/bc68049ba1568fe8bb188720c62bcc993e62a364901ba41a533aa2991cceaf82/diff:/var/lib/docker/overlay2/c3373595ff40ba0ece2698f99fc2e1c9a83c0ef6a1df119125e3009256dee2ed/diff:/var/lib/docker/overlay2/59c87dca7d8987a7e1b5cd959772e06b96d6ecb36399ff9e35a1ecfe4ed33345/diff:/var/lib/docker/overlay2/22434c33a4994657a469b040789f269ac912f4046d76f2531dff05de4700fb3b/diff:/var/lib/docker/overlay2/699ea76dd0a43fedc031501535714f087d7ec3f37593390c9e81c029373c7f8f/diff:/var/lib/docker/overlay2/e9414c264977801651ed9f3ee268cd0f245614747e184e8f3170e1e95d1fc081/diff:/var/lib/docker/overlay2/2781a0c689754699793aa9bdfeeabdaa1c6905e265302dd267c6c12daa01eb9c/diff:/var/lib/docker/overlay2/4b59a1fc73d3e865eaf7e2e62fd6d2808234c79d79b6b30f6b1a482a291580d3/diff:/var/lib/docker/overlay2/7f51e83dcff3227064daa2b7cc6a7c87f8f5e415fa8723316c24512d6029941d/diff:/var/lib/docker/overlay2/50662c60babc4d383f2af76fc66f3712bcc9e85a50f0525fa680c8336af46ce3/diff:/var/lib/docker/overlay2/2112d8437fae31ae95f85bdf08e3f29d09d7b8
adf34c9608a2e3bfecc049e0c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-136000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-136000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-136000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "13c5a8c73f7d0b11fc92b00e1930aa2a554af42a7b3351904954459cbd927fce",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55132"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55134"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55136"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/13c5a8c73f7d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-136000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "845795d4cf37",
	                        "old-k8s-version-136000"
	                    ],
	                    "NetworkID": "a4c82c2a3592223db620bf95332091613324019646bbe58152af123c5085aba4",
	                    "EndpointID": "edd09a73a6c2e6bd1a1f9a964b90c22bb2999232e698e5ae8e29125bd541e10e",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 6 (398.524258ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 15:06:46.277618   21098 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-136000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-136000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (497.66s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-136000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0203 15:06:51.828932    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:06:59.570233    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:06:59.996902    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 15:07:18.125482    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:07:19.509355    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:07:45.816813    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:07:58.151121    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:08:18.551539    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:08:21.492471    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:08:30.626456    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 15:08:46.244821    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:09:01.925148    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:09:29.628368    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:10:14.279620    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:10:14.305005    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:10:37.649911    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:10:41.994976    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:10:53.070614    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 15:10:53.786843    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 15:11:05.336353    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
E0203 15:11:10.726049    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 15:11:35.696534    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 15:11:51.835540    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:12:00.004800    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-136000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m13.042385977s)

                                                
                                                
-- stdout --
	* [old-k8s-version-136000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-136000 in cluster old-k8s-version-136000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-136000" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 15:06:48.314931   21126 out.go:296] Setting OutFile to fd 1 ...
	I0203 15:06:48.315085   21126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 15:06:48.315090   21126 out.go:309] Setting ErrFile to fd 2...
	I0203 15:06:48.315094   21126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 15:06:48.315204   21126 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 15:06:48.315684   21126 out.go:303] Setting JSON to false
	I0203 15:06:48.334309   21126 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":3983,"bootTime":1675461625,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0203 15:06:48.334420   21126 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0203 15:06:48.356267   21126 out.go:177] * [old-k8s-version-136000] minikube v1.29.0 on Darwin 13.2
	I0203 15:06:48.398245   21126 notify.go:220] Checking for updates...
	I0203 15:06:48.420055   21126 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 15:06:48.461726   21126 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 15:06:48.503610   21126 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0203 15:06:48.524962   21126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 15:06:48.546149   21126 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	I0203 15:06:48.567771   21126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 15:06:48.589590   21126 config.go:180] Loaded profile config "old-k8s-version-136000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0203 15:06:48.611968   21126 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0203 15:06:48.633871   21126 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 15:06:48.696132   21126 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0203 15:06:48.696266   21126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 15:06:48.840392   21126 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 23:06:48.746644679 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 15:06:48.862276   21126 out.go:177] * Using the docker driver based on existing profile
	I0203 15:06:48.883105   21126 start.go:296] selected driver: docker
	I0203 15:06:48.883133   21126 start.go:857] validating driver "docker" against &{Name:old-k8s-version-136000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:06:48.883252   21126 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 15:06:48.886919   21126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 15:06:49.029804   21126 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 23:06:48.937432002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 15:06:49.029940   21126 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 15:06:49.029960   21126 cni.go:84] Creating CNI manager for ""
	I0203 15:06:49.029973   21126 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0203 15:06:49.029985   21126 start_flags.go:319] config:
	{Name:old-k8s-version-136000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:06:49.051707   21126 out.go:177] * Starting control plane node old-k8s-version-136000 in cluster old-k8s-version-136000
	I0203 15:06:49.073787   21126 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 15:06:49.095624   21126 out.go:177] * Pulling base image ...
	I0203 15:06:49.138514   21126 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 15:06:49.138610   21126 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0203 15:06:49.138639   21126 cache.go:57] Caching tarball of preloaded images
	I0203 15:06:49.138636   21126 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 15:06:49.138898   21126 preload.go:174] Found /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 15:06:49.138924   21126 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0203 15:06:49.139965   21126 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/config.json ...
	I0203 15:06:49.197137   21126 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon, skipping pull
	I0203 15:06:49.197155   21126 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in daemon, skipping load
	I0203 15:06:49.197175   21126 cache.go:193] Successfully downloaded all kic artifacts
	I0203 15:06:49.197235   21126 start.go:364] acquiring machines lock for old-k8s-version-136000: {Name:mk6d4a37aad431df09b59c262f13f34239bde2da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 15:06:49.197327   21126 start.go:368] acquired machines lock for "old-k8s-version-136000" in 71.801µs
	I0203 15:06:49.197350   21126 start.go:96] Skipping create...Using existing machine configuration
	I0203 15:06:49.197367   21126 fix.go:55] fixHost starting: 
	I0203 15:06:49.197625   21126 cli_runner.go:164] Run: docker container inspect old-k8s-version-136000 --format={{.State.Status}}
	I0203 15:06:49.254258   21126 fix.go:103] recreateIfNeeded on old-k8s-version-136000: state=Stopped err=<nil>
	W0203 15:06:49.254295   21126 fix.go:129] unexpected machine state, will restart: <nil>
	I0203 15:06:49.297653   21126 out.go:177] * Restarting existing docker container for "old-k8s-version-136000" ...
	I0203 15:06:49.319082   21126 cli_runner.go:164] Run: docker start old-k8s-version-136000
	I0203 15:06:49.652101   21126 cli_runner.go:164] Run: docker container inspect old-k8s-version-136000 --format={{.State.Status}}
	I0203 15:06:49.711709   21126 kic.go:426] container "old-k8s-version-136000" state is running.
	I0203 15:06:49.712279   21126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-136000
	I0203 15:06:49.775510   21126 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/config.json ...
	I0203 15:06:49.775982   21126 machine.go:88] provisioning docker machine ...
	I0203 15:06:49.776009   21126 ubuntu.go:169] provisioning hostname "old-k8s-version-136000"
	I0203 15:06:49.776131   21126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:06:49.845338   21126 main.go:141] libmachine: Using SSH client type: native
	I0203 15:06:49.845535   21126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55352 <nil> <nil>}
	I0203 15:06:49.845547   21126 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-136000 && echo "old-k8s-version-136000" | sudo tee /etc/hostname
	I0203 15:06:50.004516   21126 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-136000
	
	I0203 15:06:50.004605   21126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:06:50.064289   21126 main.go:141] libmachine: Using SSH client type: native
	I0203 15:06:50.064459   21126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55352 <nil> <nil>}
	I0203 15:06:50.064474   21126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-136000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-136000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-136000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 15:06:50.192369   21126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 15:06:50.192387   21126 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15770-1719/.minikube CaCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15770-1719/.minikube}
	I0203 15:06:50.192410   21126 ubuntu.go:177] setting up certificates
	I0203 15:06:50.192418   21126 provision.go:83] configureAuth start
	I0203 15:06:50.192492   21126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-136000
	I0203 15:06:50.250087   21126 provision.go:138] copyHostCerts
	I0203 15:06:50.250188   21126 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem, removing ...
	I0203 15:06:50.250197   21126 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem
	I0203 15:06:50.250303   21126 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem (1078 bytes)
	I0203 15:06:50.250518   21126 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem, removing ...
	I0203 15:06:50.250525   21126 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem
	I0203 15:06:50.250627   21126 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem (1123 bytes)
	I0203 15:06:50.250823   21126 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem, removing ...
	I0203 15:06:50.250832   21126 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem
	I0203 15:06:50.250933   21126 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem (1675 bytes)
	I0203 15:06:50.251059   21126 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-136000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-136000]
	I0203 15:06:50.445030   21126 provision.go:172] copyRemoteCerts
	I0203 15:06:50.445094   21126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 15:06:50.445147   21126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:06:50.502363   21126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55352 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa Username:docker}
	I0203 15:06:50.593197   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 15:06:50.610508   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0203 15:06:50.628058   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 15:06:50.645419   21126 provision.go:86] duration metric: configureAuth took 452.977513ms
	I0203 15:06:50.645432   21126 ubuntu.go:193] setting minikube options for container-runtime
	I0203 15:06:50.645607   21126 config.go:180] Loaded profile config "old-k8s-version-136000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0203 15:06:50.645683   21126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:06:50.703031   21126 main.go:141] libmachine: Using SSH client type: native
	I0203 15:06:50.703230   21126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55352 <nil> <nil>}
	I0203 15:06:50.703239   21126 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 15:06:50.830387   21126 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0203 15:06:50.830402   21126 ubuntu.go:71] root file system type: overlay
	I0203 15:06:50.830546   21126 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 15:06:50.830635   21126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:06:50.888549   21126 main.go:141] libmachine: Using SSH client type: native
	I0203 15:06:50.888734   21126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55352 <nil> <nil>}
	I0203 15:06:50.888782   21126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 15:06:51.024357   21126 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 15:06:51.024456   21126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:06:51.081980   21126 main.go:141] libmachine: Using SSH client type: native
	I0203 15:06:51.082138   21126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55352 <nil> <nil>}
	I0203 15:06:51.082158   21126 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 15:06:51.213125   21126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 15:06:51.213140   21126 machine.go:91] provisioned docker machine in 1.437116558s
	I0203 15:06:51.213146   21126 start.go:300] post-start starting for "old-k8s-version-136000" (driver="docker")
	I0203 15:06:51.213151   21126 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 15:06:51.213214   21126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 15:06:51.213266   21126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:06:51.272624   21126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55352 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa Username:docker}
	I0203 15:06:51.364830   21126 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 15:06:51.368474   21126 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0203 15:06:51.368492   21126 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0203 15:06:51.368500   21126 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0203 15:06:51.368504   21126 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0203 15:06:51.368513   21126 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/addons for local assets ...
	I0203 15:06:51.368617   21126 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/files for local assets ...
	I0203 15:06:51.368798   21126 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> 25682.pem in /etc/ssl/certs
	I0203 15:06:51.368992   21126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 15:06:51.376325   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /etc/ssl/certs/25682.pem (1708 bytes)
	I0203 15:06:51.394063   21126 start.go:303] post-start completed in 180.903217ms
	I0203 15:06:51.394141   21126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 15:06:51.394197   21126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:06:51.451989   21126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55352 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa Username:docker}
	I0203 15:06:51.540535   21126 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0203 15:06:51.545331   21126 fix.go:57] fixHost completed within 2.347916902s
	I0203 15:06:51.545345   21126 start.go:83] releasing machines lock for "old-k8s-version-136000", held for 2.34795758s
	I0203 15:06:51.545458   21126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-136000
	I0203 15:06:51.602012   21126 ssh_runner.go:195] Run: cat /version.json
	I0203 15:06:51.602029   21126 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0203 15:06:51.602072   21126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:06:51.602110   21126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:06:51.662360   21126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55352 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa Username:docker}
	I0203 15:06:51.662413   21126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55352 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/old-k8s-version-136000/id_rsa Username:docker}
	I0203 15:06:51.975044   21126 ssh_runner.go:195] Run: systemctl --version
	I0203 15:06:51.980182   21126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0203 15:06:51.985147   21126 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0203 15:06:51.985223   21126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0203 15:06:51.992858   21126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0203 15:06:52.000352   21126 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0203 15:06:52.000367   21126 start.go:483] detecting cgroup driver to use...
	I0203 15:06:52.000377   21126 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 15:06:52.000474   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 15:06:52.013887   21126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0203 15:06:52.022502   21126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 15:06:52.031385   21126 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 15:06:52.031447   21126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 15:06:52.040446   21126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 15:06:52.049965   21126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 15:06:52.058636   21126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 15:06:52.067207   21126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 15:06:52.075537   21126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 15:06:52.084296   21126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 15:06:52.091531   21126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 15:06:52.098645   21126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:06:52.162083   21126 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 15:06:52.237053   21126 start.go:483] detecting cgroup driver to use...
	I0203 15:06:52.237073   21126 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 15:06:52.237138   21126 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 15:06:52.247593   21126 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0203 15:06:52.247658   21126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 15:06:52.257626   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 15:06:52.271471   21126 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 15:06:52.337377   21126 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 15:06:52.418099   21126 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 15:06:52.418115   21126 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0203 15:06:52.431499   21126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:06:52.531154   21126 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 15:06:52.735987   21126 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 15:06:52.765394   21126 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 15:06:52.839487   21126 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	I0203 15:06:52.839608   21126 cli_runner.go:164] Run: docker exec -t old-k8s-version-136000 dig +short host.docker.internal
	I0203 15:06:52.954202   21126 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0203 15:06:52.954321   21126 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0203 15:06:52.958605   21126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 15:06:52.968669   21126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:06:53.025800   21126 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 15:06:53.025873   21126 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 15:06:53.050570   21126 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0203 15:06:53.050588   21126 docker.go:560] Images already preloaded, skipping extraction
	I0203 15:06:53.050668   21126 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 15:06:53.075731   21126 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0203 15:06:53.075756   21126 cache_images.go:84] Images are preloaded, skipping loading
	I0203 15:06:53.075841   21126 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 15:06:53.145233   21126 cni.go:84] Creating CNI manager for ""
	I0203 15:06:53.145249   21126 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0203 15:06:53.145265   21126 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0203 15:06:53.145279   21126 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-136000 NodeName:old-k8s-version-136000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0203 15:06:53.145384   21126 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-136000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-136000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 15:06:53.145473   21126 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-136000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0203 15:06:53.145535   21126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0203 15:06:53.153812   21126 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 15:06:53.153880   21126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 15:06:53.161356   21126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0203 15:06:53.174218   21126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 15:06:53.188975   21126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0203 15:06:53.202162   21126 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0203 15:06:53.206300   21126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 15:06:53.216432   21126 certs.go:56] Setting up /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000 for IP: 192.168.67.2
	I0203 15:06:53.216451   21126 certs.go:186] acquiring lock for shared ca certs: {Name:mkdec04c6cc16ac0dcab0ae849b602e6c1942576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:06:53.216637   21126 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key
	I0203 15:06:53.216703   21126 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key
	I0203 15:06:53.216809   21126 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.key
	I0203 15:06:53.216882   21126 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.key.c7fa3a9e
	I0203 15:06:53.216947   21126 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/proxy-client.key
	I0203 15:06:53.217164   21126 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem (1338 bytes)
	W0203 15:06:53.217205   21126 certs.go:397] ignoring /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568_empty.pem, impossibly tiny 0 bytes
	I0203 15:06:53.217215   21126 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem (1675 bytes)
	I0203 15:06:53.217252   21126 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem (1078 bytes)
	I0203 15:06:53.217285   21126 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem (1123 bytes)
	I0203 15:06:53.217318   21126 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem (1675 bytes)
	I0203 15:06:53.217385   21126 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem (1708 bytes)
	I0203 15:06:53.218002   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0203 15:06:53.235767   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 15:06:53.253495   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 15:06:53.271514   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 15:06:53.288839   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 15:06:53.306302   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 15:06:53.323921   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 15:06:53.355882   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 15:06:53.373173   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /usr/share/ca-certificates/25682.pem (1708 bytes)
	I0203 15:06:53.390861   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 15:06:53.408186   21126 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem --> /usr/share/ca-certificates/2568.pem (1338 bytes)
	I0203 15:06:53.425631   21126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 15:06:53.438800   21126 ssh_runner.go:195] Run: openssl version
	I0203 15:06:53.444704   21126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25682.pem && ln -fs /usr/share/ca-certificates/25682.pem /etc/ssl/certs/25682.pem"
	I0203 15:06:53.452976   21126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25682.pem
	I0203 15:06:53.457051   21126 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb  3 22:13 /usr/share/ca-certificates/25682.pem
	I0203 15:06:53.457103   21126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25682.pem
	I0203 15:06:53.462665   21126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25682.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 15:06:53.470355   21126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 15:06:53.478569   21126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 15:06:53.482655   21126 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb  3 22:08 /usr/share/ca-certificates/minikubeCA.pem
	I0203 15:06:53.482705   21126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 15:06:53.488129   21126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0203 15:06:53.495866   21126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2568.pem && ln -fs /usr/share/ca-certificates/2568.pem /etc/ssl/certs/2568.pem"
	I0203 15:06:53.504248   21126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2568.pem
	I0203 15:06:53.508485   21126 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb  3 22:13 /usr/share/ca-certificates/2568.pem
	I0203 15:06:53.508534   21126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2568.pem
	I0203 15:06:53.514231   21126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2568.pem /etc/ssl/certs/51391683.0"
	I0203 15:06:53.521725   21126 kubeadm.go:401] StartCluster: {Name:old-k8s-version-136000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:06:53.521850   21126 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 15:06:53.545386   21126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 15:06:53.553352   21126 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0203 15:06:53.553369   21126 kubeadm.go:633] restartCluster start
	I0203 15:06:53.553424   21126 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0203 15:06:53.560424   21126 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:53.560495   21126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-136000
	I0203 15:06:53.618019   21126 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-136000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 15:06:53.618181   21126 kubeconfig.go:146] "old-k8s-version-136000" context is missing from /Users/jenkins/minikube-integration/15770-1719/kubeconfig - will repair!
	I0203 15:06:53.618494   21126 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/kubeconfig: {Name:mkf113f45b09a6304f4248a99f0e16d42a3468fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:06:53.619882   21126 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0203 15:06:53.627809   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:53.627877   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:53.636689   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:54.138104   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:54.138307   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:54.149279   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:54.638479   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:54.638697   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:54.649903   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:55.137069   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:55.137140   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:55.147374   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:55.636849   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:55.636977   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:55.647325   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:56.138846   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:56.139084   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:56.150466   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:56.637034   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:56.637208   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:56.648342   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:57.137645   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:57.137885   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:57.148861   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:57.636898   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:57.637026   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:57.648176   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:58.137269   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:58.137496   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:58.148716   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:58.638944   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:58.639145   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:58.650002   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:59.137336   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:59.137479   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:59.148534   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:06:59.636947   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:06:59.637032   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:06:59.646779   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:07:00.138031   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:07:00.138213   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:07:00.149405   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:07:00.638962   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:07:00.639197   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:07:00.650363   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:07:01.136955   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:07:01.137180   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:07:01.147872   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:07:01.636977   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:07:01.637192   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:07:01.648381   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:07:02.137354   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:07:02.137427   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:07:02.147001   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:07:02.637111   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:07:02.637337   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:07:02.648376   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:07:03.139044   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:07:03.139207   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:07:03.150232   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:07:03.637065   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:07:03.637185   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:07:03.648029   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:07:03.648041   21126 api_server.go:165] Checking apiserver status ...
	I0203 15:07:03.648102   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:07:03.656468   21126 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:07:03.656481   21126 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0203 15:07:03.656490   21126 kubeadm.go:1120] stopping kube-system containers ...
	I0203 15:07:03.656561   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 15:07:03.679288   21126 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 15:07:03.690185   21126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 15:07:03.697923   21126 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Feb  3 23:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Feb  3 23:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Feb  3 23:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Feb  3 23:03 /etc/kubernetes/scheduler.conf
	
	I0203 15:07:03.697981   21126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 15:07:03.705537   21126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 15:07:03.713027   21126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 15:07:03.720843   21126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 15:07:03.728591   21126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 15:07:03.736573   21126 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0203 15:07:03.736585   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:07:03.789019   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:07:04.562436   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:07:04.766917   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:07:04.827273   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:07:04.908565   21126 api_server.go:51] waiting for apiserver process to appear ...
	I0203 15:07:04.908633   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:05.418320   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:05.919429   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:06.417893   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:06.917677   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:07.418169   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:07.918051   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:08.418666   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:08.918207   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:09.418490   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:09.918502   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:10.418193   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:10.918269   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:11.418112   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:11.919070   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:12.418240   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:12.918719   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:13.417919   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:13.918341   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:14.418044   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:14.917965   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:15.418680   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:15.918084   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:16.418251   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:16.918082   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:17.417926   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:17.918157   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:18.418072   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:18.920011   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:19.418106   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:19.918102   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:20.420160   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:20.918335   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:21.418633   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:21.918735   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:22.419144   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:22.917989   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:23.418932   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:23.918168   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:24.418854   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:24.918658   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:25.420175   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:25.918225   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:26.420173   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:26.918351   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:27.418560   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:27.918174   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:28.418217   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:28.918279   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:29.418223   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:29.918172   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:30.418244   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:30.918451   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:31.418220   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:31.918655   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:32.418919   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:32.918397   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:33.418360   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:33.918971   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:34.418779   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:34.918382   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:35.418542   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:35.918409   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:36.419365   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:36.918563   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:37.418765   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:37.918395   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:38.418903   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:38.919241   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:39.418665   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:39.918402   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:40.418959   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:40.919940   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:41.418717   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:41.919365   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:42.418621   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:42.918993   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:43.419521   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:43.919261   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:44.418768   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:44.920528   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:45.418554   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:45.920068   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:46.419016   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:46.919515   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:47.419377   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:47.920020   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:48.419190   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:48.919932   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:49.418829   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:49.919444   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:50.418918   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:50.919028   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:51.419197   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:51.918805   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:52.418719   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:52.918695   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:53.418820   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:53.919146   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:54.418883   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:54.918732   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:55.418739   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:55.918736   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:56.418777   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:56.919276   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:57.419127   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:57.918974   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:58.418887   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:58.918807   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:59.418886   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:07:59.918844   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:00.418912   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:00.919172   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:01.418829   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:01.918881   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:02.419106   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:02.919147   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:03.418980   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:03.919501   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:04.419077   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:04.919033   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:08:04.944614   21126 logs.go:279] 0 containers: []
	W0203 15:08:04.944628   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:08:04.944705   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:08:04.966867   21126 logs.go:279] 0 containers: []
	W0203 15:08:04.966880   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:08:04.966951   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:08:04.989957   21126 logs.go:279] 0 containers: []
	W0203 15:08:04.989973   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:08:04.990072   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:08:05.016541   21126 logs.go:279] 0 containers: []
	W0203 15:08:05.016554   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:08:05.016626   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:08:05.039725   21126 logs.go:279] 0 containers: []
	W0203 15:08:05.039740   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:08:05.039823   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:08:05.063579   21126 logs.go:279] 0 containers: []
	W0203 15:08:05.063596   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:08:05.063670   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:08:05.087221   21126 logs.go:279] 0 containers: []
	W0203 15:08:05.087234   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:08:05.087310   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:08:05.113141   21126 logs.go:279] 0 containers: []
	W0203 15:08:05.113159   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:08:05.113170   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:08:05.113190   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:08:05.174468   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:08:05.174479   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:08:05.174488   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:08:05.191580   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:08:05.191598   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:08:07.246161   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054501574s)
	I0203 15:08:07.246345   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:08:07.246359   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:08:07.284450   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:08:07.284465   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:08:09.798087   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:09.920159   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:08:09.945049   21126 logs.go:279] 0 containers: []
	W0203 15:08:09.945062   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:08:09.945129   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:08:09.968784   21126 logs.go:279] 0 containers: []
	W0203 15:08:09.968797   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:08:09.968871   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:08:09.991966   21126 logs.go:279] 0 containers: []
	W0203 15:08:09.991979   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:08:09.992048   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:08:10.014602   21126 logs.go:279] 0 containers: []
	W0203 15:08:10.014615   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:08:10.014693   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:08:10.037625   21126 logs.go:279] 0 containers: []
	W0203 15:08:10.037638   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:08:10.037713   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:08:10.060358   21126 logs.go:279] 0 containers: []
	W0203 15:08:10.060371   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:08:10.060470   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:08:10.083097   21126 logs.go:279] 0 containers: []
	W0203 15:08:10.083111   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:08:10.083180   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:08:10.106017   21126 logs.go:279] 0 containers: []
	W0203 15:08:10.106030   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:08:10.106053   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:08:10.106060   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:08:10.145760   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:08:10.145778   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:08:10.158538   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:08:10.158553   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:08:10.216283   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:08:10.216297   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:08:10.216304   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:08:10.231790   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:08:10.231804   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:08:12.281042   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049177083s)
	I0203 15:08:14.782514   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:14.921275   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:08:14.946694   21126 logs.go:279] 0 containers: []
	W0203 15:08:14.946708   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:08:14.946779   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:08:14.969578   21126 logs.go:279] 0 containers: []
	W0203 15:08:14.969591   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:08:14.969659   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:08:14.993188   21126 logs.go:279] 0 containers: []
	W0203 15:08:14.993202   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:08:14.993269   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:08:15.016768   21126 logs.go:279] 0 containers: []
	W0203 15:08:15.016781   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:08:15.016850   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:08:15.039254   21126 logs.go:279] 0 containers: []
	W0203 15:08:15.039268   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:08:15.039337   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:08:15.061933   21126 logs.go:279] 0 containers: []
	W0203 15:08:15.061946   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:08:15.062013   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:08:15.084394   21126 logs.go:279] 0 containers: []
	W0203 15:08:15.084407   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:08:15.084479   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:08:15.108278   21126 logs.go:279] 0 containers: []
	W0203 15:08:15.108291   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:08:15.108298   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:08:15.108308   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:08:15.147607   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:08:15.147620   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:08:15.159781   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:08:15.159795   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:08:15.213740   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:08:15.213757   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:08:15.213763   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:08:15.229288   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:08:15.229302   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:08:17.280112   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05075047s)
	I0203 15:08:19.780893   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:19.919372   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:08:19.944964   21126 logs.go:279] 0 containers: []
	W0203 15:08:19.944976   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:08:19.945048   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:08:19.968601   21126 logs.go:279] 0 containers: []
	W0203 15:08:19.968614   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:08:19.968684   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:08:19.991556   21126 logs.go:279] 0 containers: []
	W0203 15:08:19.991571   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:08:19.991644   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:08:20.014170   21126 logs.go:279] 0 containers: []
	W0203 15:08:20.014184   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:08:20.014253   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:08:20.038506   21126 logs.go:279] 0 containers: []
	W0203 15:08:20.038520   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:08:20.038605   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:08:20.061249   21126 logs.go:279] 0 containers: []
	W0203 15:08:20.061261   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:08:20.061336   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:08:20.085213   21126 logs.go:279] 0 containers: []
	W0203 15:08:20.085228   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:08:20.085298   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:08:20.109153   21126 logs.go:279] 0 containers: []
	W0203 15:08:20.109167   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:08:20.109174   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:08:20.109183   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:08:22.160248   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051005777s)
	I0203 15:08:22.160359   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:08:22.160366   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:08:22.198427   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:08:22.198453   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:08:22.210652   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:08:22.210664   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:08:22.265460   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:08:22.265476   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:08:22.265483   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:08:24.780888   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:24.921640   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:08:24.947616   21126 logs.go:279] 0 containers: []
	W0203 15:08:24.947628   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:08:24.947699   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:08:24.970800   21126 logs.go:279] 0 containers: []
	W0203 15:08:24.970818   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:08:24.970892   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:08:24.994162   21126 logs.go:279] 0 containers: []
	W0203 15:08:24.994177   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:08:24.994252   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:08:25.017714   21126 logs.go:279] 0 containers: []
	W0203 15:08:25.017728   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:08:25.017798   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:08:25.041590   21126 logs.go:279] 0 containers: []
	W0203 15:08:25.041602   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:08:25.041668   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:08:25.064819   21126 logs.go:279] 0 containers: []
	W0203 15:08:25.064832   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:08:25.064900   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:08:25.086315   21126 logs.go:279] 0 containers: []
	W0203 15:08:25.086328   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:08:25.086394   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:08:25.109880   21126 logs.go:279] 0 containers: []
	W0203 15:08:25.109893   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:08:25.109901   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:08:25.109910   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:08:25.150541   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:08:25.150561   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:08:25.163629   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:08:25.163645   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:08:25.224576   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:08:25.224595   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:08:25.224603   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:08:25.240068   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:08:25.240082   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:08:27.291243   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051103151s)
	I0203 15:08:29.791691   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:29.921623   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:08:29.947537   21126 logs.go:279] 0 containers: []
	W0203 15:08:29.947551   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:08:29.947622   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:08:29.970931   21126 logs.go:279] 0 containers: []
	W0203 15:08:29.970943   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:08:29.971017   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:08:29.994547   21126 logs.go:279] 0 containers: []
	W0203 15:08:29.994561   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:08:29.994636   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:08:30.017314   21126 logs.go:279] 0 containers: []
	W0203 15:08:30.017327   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:08:30.017399   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:08:30.041470   21126 logs.go:279] 0 containers: []
	W0203 15:08:30.041484   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:08:30.041552   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:08:30.064233   21126 logs.go:279] 0 containers: []
	W0203 15:08:30.064249   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:08:30.064319   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:08:30.087274   21126 logs.go:279] 0 containers: []
	W0203 15:08:30.087289   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:08:30.087363   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:08:30.110980   21126 logs.go:279] 0 containers: []
	W0203 15:08:30.110993   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:08:30.111001   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:08:30.111009   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:08:32.161732   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050663086s)
	I0203 15:08:32.161843   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:08:32.161850   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:08:32.199591   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:08:32.199605   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:08:32.211666   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:08:32.211680   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:08:32.265938   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:08:32.265953   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:08:32.265968   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:08:34.783784   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:34.920421   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:08:34.945200   21126 logs.go:279] 0 containers: []
	W0203 15:08:34.945212   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:08:34.945279   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:08:34.968809   21126 logs.go:279] 0 containers: []
	W0203 15:08:34.968823   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:08:34.968895   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:08:34.992095   21126 logs.go:279] 0 containers: []
	W0203 15:08:34.992108   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:08:34.992175   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:08:35.014659   21126 logs.go:279] 0 containers: []
	W0203 15:08:35.014672   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:08:35.014743   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:08:35.037444   21126 logs.go:279] 0 containers: []
	W0203 15:08:35.037458   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:08:35.037527   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:08:35.061155   21126 logs.go:279] 0 containers: []
	W0203 15:08:35.061167   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:08:35.061236   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:08:35.084823   21126 logs.go:279] 0 containers: []
	W0203 15:08:35.084836   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:08:35.084918   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:08:35.107602   21126 logs.go:279] 0 containers: []
	W0203 15:08:35.107617   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:08:35.107624   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:08:35.107631   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:08:35.123043   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:08:35.123057   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:08:37.172267   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049150839s)
	I0203 15:08:37.172377   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:08:37.172384   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:08:37.211206   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:08:37.211221   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:08:37.223659   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:08:37.223675   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:08:37.278496   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:08:39.778894   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:39.920792   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:08:39.946674   21126 logs.go:279] 0 containers: []
	W0203 15:08:39.946688   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:08:39.946764   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:08:39.969919   21126 logs.go:279] 0 containers: []
	W0203 15:08:39.969934   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:08:39.970004   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:08:39.993498   21126 logs.go:279] 0 containers: []
	W0203 15:08:39.993511   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:08:39.993587   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:08:40.017113   21126 logs.go:279] 0 containers: []
	W0203 15:08:40.017126   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:08:40.017199   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:08:40.040438   21126 logs.go:279] 0 containers: []
	W0203 15:08:40.040450   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:08:40.040525   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:08:40.064230   21126 logs.go:279] 0 containers: []
	W0203 15:08:40.064242   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:08:40.064309   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:08:40.086734   21126 logs.go:279] 0 containers: []
	W0203 15:08:40.086747   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:08:40.086820   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:08:40.110839   21126 logs.go:279] 0 containers: []
	W0203 15:08:40.110852   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:08:40.110859   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:08:40.110866   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:08:40.127613   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:08:40.127626   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:08:42.178852   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051165894s)
	I0203 15:08:42.178960   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:08:42.178966   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:08:42.216525   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:08:42.216541   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:08:42.228446   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:08:42.228462   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:08:42.283842   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:08:44.785807   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:44.919948   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:08:44.945494   21126 logs.go:279] 0 containers: []
	W0203 15:08:44.945509   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:08:44.945579   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:08:44.968946   21126 logs.go:279] 0 containers: []
	W0203 15:08:44.968959   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:08:44.969027   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:08:44.992848   21126 logs.go:279] 0 containers: []
	W0203 15:08:44.992862   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:08:44.992928   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:08:45.016476   21126 logs.go:279] 0 containers: []
	W0203 15:08:45.016490   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:08:45.016556   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:08:45.040151   21126 logs.go:279] 0 containers: []
	W0203 15:08:45.040164   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:08:45.040254   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:08:45.063487   21126 logs.go:279] 0 containers: []
	W0203 15:08:45.063501   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:08:45.063572   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:08:45.086261   21126 logs.go:279] 0 containers: []
	W0203 15:08:45.086275   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:08:45.086334   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:08:45.110125   21126 logs.go:279] 0 containers: []
	W0203 15:08:45.110141   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:08:45.110150   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:08:45.110159   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:08:45.147479   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:08:45.147493   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:08:45.159683   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:08:45.159700   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:08:45.214579   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:08:45.214598   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:08:45.214604   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:08:45.230010   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:08:45.230023   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:08:47.281135   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051053543s)
	I0203 15:08:49.781613   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:49.920642   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:08:49.944642   21126 logs.go:279] 0 containers: []
	W0203 15:08:49.944656   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:08:49.944728   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:08:49.968754   21126 logs.go:279] 0 containers: []
	W0203 15:08:49.968767   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:08:49.968835   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:08:49.992023   21126 logs.go:279] 0 containers: []
	W0203 15:08:49.992036   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:08:49.992103   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:08:50.015528   21126 logs.go:279] 0 containers: []
	W0203 15:08:50.015542   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:08:50.015611   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:08:50.038771   21126 logs.go:279] 0 containers: []
	W0203 15:08:50.038784   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:08:50.038854   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:08:50.062467   21126 logs.go:279] 0 containers: []
	W0203 15:08:50.062484   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:08:50.062558   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:08:50.086718   21126 logs.go:279] 0 containers: []
	W0203 15:08:50.086732   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:08:50.086808   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:08:50.109633   21126 logs.go:279] 0 containers: []
	W0203 15:08:50.109647   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:08:50.109655   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:08:50.109663   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:08:50.148161   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:08:50.148176   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:08:50.160196   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:08:50.160210   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:08:50.214909   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:08:50.214920   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:08:50.214927   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:08:50.230188   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:08:50.230202   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:08:52.280250   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049990318s)
	I0203 15:08:54.782038   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:54.920312   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:08:54.949144   21126 logs.go:279] 0 containers: []
	W0203 15:08:54.949159   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:08:54.949227   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:08:54.973196   21126 logs.go:279] 0 containers: []
	W0203 15:08:54.973210   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:08:54.973278   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:08:54.996982   21126 logs.go:279] 0 containers: []
	W0203 15:08:54.996995   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:08:54.997062   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:08:55.021017   21126 logs.go:279] 0 containers: []
	W0203 15:08:55.021030   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:08:55.021099   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:08:55.045113   21126 logs.go:279] 0 containers: []
	W0203 15:08:55.045127   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:08:55.045195   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:08:55.068243   21126 logs.go:279] 0 containers: []
	W0203 15:08:55.068256   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:08:55.068326   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:08:55.091638   21126 logs.go:279] 0 containers: []
	W0203 15:08:55.091652   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:08:55.091720   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:08:55.115080   21126 logs.go:279] 0 containers: []
	W0203 15:08:55.115093   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:08:55.115100   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:08:55.115107   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:08:55.128233   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:08:55.128255   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:08:55.209872   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:08:55.209885   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:08:55.209892   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:08:55.225337   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:08:55.225353   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:08:57.275635   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050223527s)
	I0203 15:08:57.275743   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:08:57.275750   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:08:59.814356   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:08:59.920728   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:08:59.945998   21126 logs.go:279] 0 containers: []
	W0203 15:08:59.946011   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:08:59.946085   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:08:59.969549   21126 logs.go:279] 0 containers: []
	W0203 15:08:59.969563   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:08:59.969634   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:08:59.992769   21126 logs.go:279] 0 containers: []
	W0203 15:08:59.992783   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:08:59.992852   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:09:00.015437   21126 logs.go:279] 0 containers: []
	W0203 15:09:00.015449   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:09:00.015516   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:09:00.039422   21126 logs.go:279] 0 containers: []
	W0203 15:09:00.039436   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:09:00.039506   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:09:00.061798   21126 logs.go:279] 0 containers: []
	W0203 15:09:00.061812   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:09:00.061888   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:09:00.084694   21126 logs.go:279] 0 containers: []
	W0203 15:09:00.084707   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:09:00.084777   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:09:00.107820   21126 logs.go:279] 0 containers: []
	W0203 15:09:00.107833   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:09:00.107840   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:09:00.107854   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:09:00.145001   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:09:00.145015   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:09:00.157075   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:09:00.157090   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:09:00.212467   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:09:00.212478   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:09:00.212484   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:09:00.227862   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:09:00.227876   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:09:02.278547   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050612533s)
	I0203 15:09:04.779960   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:09:04.920774   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:09:04.945613   21126 logs.go:279] 0 containers: []
	W0203 15:09:04.945626   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:09:04.945699   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:09:04.968881   21126 logs.go:279] 0 containers: []
	W0203 15:09:04.968893   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:09:04.968961   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:09:04.993496   21126 logs.go:279] 0 containers: []
	W0203 15:09:04.993510   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:09:04.993582   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:09:05.016313   21126 logs.go:279] 0 containers: []
	W0203 15:09:05.016326   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:09:05.016392   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:09:05.039898   21126 logs.go:279] 0 containers: []
	W0203 15:09:05.039912   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:09:05.039983   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:09:05.062833   21126 logs.go:279] 0 containers: []
	W0203 15:09:05.062849   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:09:05.062923   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:09:05.086074   21126 logs.go:279] 0 containers: []
	W0203 15:09:05.086087   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:09:05.086158   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:09:05.108548   21126 logs.go:279] 0 containers: []
	W0203 15:09:05.108563   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:09:05.108570   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:09:05.108578   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:09:05.163380   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:09:05.163391   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:09:05.163397   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:09:05.178553   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:09:05.178566   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:09:07.227638   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049014593s)
	I0203 15:09:07.227753   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:09:07.227761   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:09:07.265204   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:09:07.265218   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:09:09.779211   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:09:09.922284   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:09:09.948182   21126 logs.go:279] 0 containers: []
	W0203 15:09:09.948195   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:09:09.948266   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:09:09.971925   21126 logs.go:279] 0 containers: []
	W0203 15:09:09.971938   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:09:09.972005   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:09:09.994467   21126 logs.go:279] 0 containers: []
	W0203 15:09:09.994480   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:09:09.994555   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:09:10.018655   21126 logs.go:279] 0 containers: []
	W0203 15:09:10.018669   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:09:10.018742   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:09:10.042925   21126 logs.go:279] 0 containers: []
	W0203 15:09:10.042938   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:09:10.043006   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:09:10.065544   21126 logs.go:279] 0 containers: []
	W0203 15:09:10.065556   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:09:10.065626   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:09:10.089407   21126 logs.go:279] 0 containers: []
	W0203 15:09:10.089419   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:09:10.089495   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:09:10.112741   21126 logs.go:279] 0 containers: []
	W0203 15:09:10.112757   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:09:10.112767   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:09:10.112777   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:09:10.129001   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:09:10.129015   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:09:12.178606   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049533189s)
	I0203 15:09:12.178720   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:09:12.178727   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:09:12.216595   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:09:12.216609   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:09:12.228759   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:09:12.228773   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:09:12.283836   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:09:14.784383   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:09:14.922671   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:09:14.948240   21126 logs.go:279] 0 containers: []
	W0203 15:09:14.948253   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:09:14.948324   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:09:14.971903   21126 logs.go:279] 0 containers: []
	W0203 15:09:14.971917   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:09:14.971990   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:09:14.994575   21126 logs.go:279] 0 containers: []
	W0203 15:09:14.994589   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:09:14.994659   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:09:15.018648   21126 logs.go:279] 0 containers: []
	W0203 15:09:15.018662   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:09:15.018729   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:09:15.042017   21126 logs.go:279] 0 containers: []
	W0203 15:09:15.042035   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:09:15.042113   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:09:15.065426   21126 logs.go:279] 0 containers: []
	W0203 15:09:15.065438   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:09:15.065508   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:09:15.088987   21126 logs.go:279] 0 containers: []
	W0203 15:09:15.089002   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:09:15.089073   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:09:15.111809   21126 logs.go:279] 0 containers: []
	W0203 15:09:15.111825   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:09:15.111832   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:09:15.111841   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:09:15.149140   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:09:15.149157   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:09:15.161441   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:09:15.161455   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:09:15.214824   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:09:15.214835   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:09:15.214842   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:09:15.230044   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:09:15.230056   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:09:17.281711   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05159714s)
	I0203 15:09:19.784121   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:09:19.921340   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:09:19.947651   21126 logs.go:279] 0 containers: []
	W0203 15:09:19.947667   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:09:19.947743   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:09:19.973024   21126 logs.go:279] 0 containers: []
	W0203 15:09:19.973039   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:09:19.973108   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:09:19.997441   21126 logs.go:279] 0 containers: []
	W0203 15:09:19.997454   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:09:19.997522   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:09:20.020413   21126 logs.go:279] 0 containers: []
	W0203 15:09:20.020426   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:09:20.020495   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:09:20.042875   21126 logs.go:279] 0 containers: []
	W0203 15:09:20.042887   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:09:20.042961   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:09:20.065499   21126 logs.go:279] 0 containers: []
	W0203 15:09:20.065512   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:09:20.065579   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:09:20.088221   21126 logs.go:279] 0 containers: []
	W0203 15:09:20.088233   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:09:20.088307   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:09:20.111119   21126 logs.go:279] 0 containers: []
	W0203 15:09:20.111131   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:09:20.111138   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:09:20.111145   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:09:20.149767   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:09:20.149781   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:09:20.161563   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:09:20.161577   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:09:20.216366   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:09:20.216377   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:09:20.216383   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:09:20.231726   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:09:20.231739   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:09:22.281539   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049741781s)
	I0203 15:09:24.782437   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:09:24.920977   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:09:24.945903   21126 logs.go:279] 0 containers: []
	W0203 15:09:24.945915   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:09:24.945982   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:09:24.969547   21126 logs.go:279] 0 containers: []
	W0203 15:09:24.969563   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:09:24.969630   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:09:24.993882   21126 logs.go:279] 0 containers: []
	W0203 15:09:24.993894   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:09:24.993960   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:09:25.017432   21126 logs.go:279] 0 containers: []
	W0203 15:09:25.017446   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:09:25.017516   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:09:25.039258   21126 logs.go:279] 0 containers: []
	W0203 15:09:25.039271   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:09:25.039340   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:09:25.062198   21126 logs.go:279] 0 containers: []
	W0203 15:09:25.062211   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:09:25.062279   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:09:25.085797   21126 logs.go:279] 0 containers: []
	W0203 15:09:25.085810   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:09:25.085885   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:09:25.112875   21126 logs.go:279] 0 containers: []
	W0203 15:09:25.112892   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:09:25.112900   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:09:25.112909   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:09:25.195111   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:09:25.195128   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:09:25.195141   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:09:25.210826   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:09:25.210843   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:09:27.261710   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050808441s)
	I0203 15:09:27.261815   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:09:27.261822   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:09:27.298963   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:09:27.298977   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:09:29.812325   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:09:29.921462   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:09:29.946655   21126 logs.go:279] 0 containers: []
	W0203 15:09:29.946669   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:09:29.946743   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:09:29.969844   21126 logs.go:279] 0 containers: []
	W0203 15:09:29.969858   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:09:29.969948   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:09:29.995739   21126 logs.go:279] 0 containers: []
	W0203 15:09:29.995754   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:09:29.995855   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:09:30.019409   21126 logs.go:279] 0 containers: []
	W0203 15:09:30.019421   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:09:30.019504   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:09:30.042233   21126 logs.go:279] 0 containers: []
	W0203 15:09:30.042250   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:09:30.042324   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:09:30.064609   21126 logs.go:279] 0 containers: []
	W0203 15:09:30.064622   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:09:30.064690   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:09:30.087419   21126 logs.go:279] 0 containers: []
	W0203 15:09:30.087432   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:09:30.087500   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:09:30.111192   21126 logs.go:279] 0 containers: []
	W0203 15:09:30.111208   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:09:30.111217   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:09:30.111224   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:09:30.149400   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:09:30.149416   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:09:30.161800   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:09:30.161814   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:09:30.216362   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:09:30.216373   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:09:30.216379   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:09:30.232241   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:09:30.232255   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:09:32.282174   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04986045s)
	I0203 15:09:34.783738   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:09:34.921526   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:09:34.946744   21126 logs.go:279] 0 containers: []
	W0203 15:09:34.946757   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:09:34.946828   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:09:34.970218   21126 logs.go:279] 0 containers: []
	W0203 15:09:34.970231   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:09:34.970302   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:09:34.994148   21126 logs.go:279] 0 containers: []
	W0203 15:09:34.994161   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:09:34.994240   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:09:35.017114   21126 logs.go:279] 0 containers: []
	W0203 15:09:35.017127   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:09:35.017193   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:09:35.040407   21126 logs.go:279] 0 containers: []
	W0203 15:09:35.040420   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:09:35.040494   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:09:35.063588   21126 logs.go:279] 0 containers: []
	W0203 15:09:35.063601   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:09:35.063683   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:09:35.087637   21126 logs.go:279] 0 containers: []
	W0203 15:09:35.087649   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:09:35.087714   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:09:35.110750   21126 logs.go:279] 0 containers: []
	W0203 15:09:35.110764   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:09:35.110771   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:09:35.110787   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:09:35.149548   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:09:35.149565   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:09:35.162009   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:09:35.162028   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:09:35.217132   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:09:35.217142   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:09:35.217148   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:09:35.233277   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:09:35.233296   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:09:37.283667   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050308164s)
	I0203 15:09:39.784101   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:09:39.921159   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:09:39.946253   21126 logs.go:279] 0 containers: []
	W0203 15:09:39.946267   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:09:39.946339   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:09:39.969438   21126 logs.go:279] 0 containers: []
	W0203 15:09:39.969452   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:09:39.969526   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:09:39.993443   21126 logs.go:279] 0 containers: []
	W0203 15:09:39.993458   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:09:39.993530   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:09:40.017472   21126 logs.go:279] 0 containers: []
	W0203 15:09:40.017486   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:09:40.017555   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:09:40.040570   21126 logs.go:279] 0 containers: []
	W0203 15:09:40.040584   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:09:40.040652   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:09:40.063704   21126 logs.go:279] 0 containers: []
	W0203 15:09:40.063717   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:09:40.063783   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:09:40.086219   21126 logs.go:279] 0 containers: []
	W0203 15:09:40.086232   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:09:40.086301   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:09:40.108530   21126 logs.go:279] 0 containers: []
	W0203 15:09:40.108544   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:09:40.108551   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:09:40.108558   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:09:40.146878   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:09:40.146901   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:09:40.159866   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:09:40.159882   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:09:40.223868   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:09:40.223880   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:09:40.223887   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:09:40.239249   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:09:40.239262   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:09:42.290573   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051252806s)
	I0203 15:09:44.790922   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:09:44.921713   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:09:44.946346   21126 logs.go:279] 0 containers: []
	W0203 15:09:44.946360   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:09:44.946427   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:09:44.969887   21126 logs.go:279] 0 containers: []
	W0203 15:09:44.969900   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:09:44.969972   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:09:44.992924   21126 logs.go:279] 0 containers: []
	W0203 15:09:44.992937   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:09:44.993004   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:09:45.016446   21126 logs.go:279] 0 containers: []
	W0203 15:09:45.016460   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:09:45.016529   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:09:45.039164   21126 logs.go:279] 0 containers: []
	W0203 15:09:45.039177   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:09:45.039243   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:09:45.061405   21126 logs.go:279] 0 containers: []
	W0203 15:09:45.061420   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:09:45.061486   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:09:45.083309   21126 logs.go:279] 0 containers: []
	W0203 15:09:45.083322   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:09:45.083391   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:09:45.106344   21126 logs.go:279] 0 containers: []
	W0203 15:09:45.106358   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:09:45.106364   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:09:45.106371   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:09:45.144583   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:09:45.144597   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:09:45.156471   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:09:45.156485   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:09:45.210928   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:09:45.210943   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:09:45.210949   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:09:45.226003   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:09:45.226016   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:09:47.275987   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049911931s)
	I0203 15:09:49.776323   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:09:49.921296   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:09:49.948265   21126 logs.go:279] 0 containers: []
	W0203 15:09:49.948279   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:09:49.948346   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:09:49.971238   21126 logs.go:279] 0 containers: []
	W0203 15:09:49.971253   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:09:49.971320   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:09:49.993795   21126 logs.go:279] 0 containers: []
	W0203 15:09:49.993809   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:09:49.993879   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:09:50.016864   21126 logs.go:279] 0 containers: []
	W0203 15:09:50.016878   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:09:50.016948   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:09:50.039658   21126 logs.go:279] 0 containers: []
	W0203 15:09:50.039673   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:09:50.039749   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:09:50.063857   21126 logs.go:279] 0 containers: []
	W0203 15:09:50.063871   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:09:50.063941   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:09:50.087428   21126 logs.go:279] 0 containers: []
	W0203 15:09:50.087442   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:09:50.087509   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:09:50.110418   21126 logs.go:279] 0 containers: []
	W0203 15:09:50.110431   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:09:50.110438   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:09:50.110445   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:09:50.122414   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:09:50.122426   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:09:50.177701   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:09:50.177716   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:09:50.177722   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:09:50.193256   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:09:50.193269   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:09:52.243084   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049756615s)
	I0203 15:09:52.243197   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:09:52.243206   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:09:54.781588   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:09:54.922356   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:09:54.948251   21126 logs.go:279] 0 containers: []
	W0203 15:09:54.948265   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:09:54.948332   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:09:54.973046   21126 logs.go:279] 0 containers: []
	W0203 15:09:54.973061   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:09:54.973137   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:09:54.997199   21126 logs.go:279] 0 containers: []
	W0203 15:09:54.997214   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:09:54.997283   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:09:55.021041   21126 logs.go:279] 0 containers: []
	W0203 15:09:55.021053   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:09:55.021119   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:09:55.043944   21126 logs.go:279] 0 containers: []
	W0203 15:09:55.043957   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:09:55.044026   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:09:55.067778   21126 logs.go:279] 0 containers: []
	W0203 15:09:55.067794   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:09:55.067865   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:09:55.092466   21126 logs.go:279] 0 containers: []
	W0203 15:09:55.092478   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:09:55.092549   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:09:55.116745   21126 logs.go:279] 0 containers: []
	W0203 15:09:55.116761   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:09:55.116773   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:09:55.116783   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:09:55.155954   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:09:55.155973   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:09:55.168929   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:09:55.168943   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:09:55.232018   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:09:55.232030   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:09:55.232036   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:09:55.247608   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:09:55.247622   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:09:57.299154   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051473996s)
	I0203 15:09:59.799467   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:09:59.922785   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:09:59.949481   21126 logs.go:279] 0 containers: []
	W0203 15:09:59.949494   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:09:59.949562   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:09:59.972392   21126 logs.go:279] 0 containers: []
	W0203 15:09:59.972406   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:09:59.972474   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:09:59.995627   21126 logs.go:279] 0 containers: []
	W0203 15:09:59.995642   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:09:59.995711   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:10:00.019327   21126 logs.go:279] 0 containers: []
	W0203 15:10:00.019339   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:10:00.019409   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:10:00.043021   21126 logs.go:279] 0 containers: []
	W0203 15:10:00.043035   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:10:00.043106   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:10:00.065391   21126 logs.go:279] 0 containers: []
	W0203 15:10:00.065404   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:10:00.065476   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:10:00.088585   21126 logs.go:279] 0 containers: []
	W0203 15:10:00.088597   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:10:00.088663   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:10:00.110957   21126 logs.go:279] 0 containers: []
	W0203 15:10:00.110969   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:10:00.110975   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:10:00.110983   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:10:00.126473   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:10:00.126485   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:10:02.176931   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050386684s)
	I0203 15:10:02.177051   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:10:02.177059   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:10:02.214117   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:10:02.214129   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:10:02.226729   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:10:02.226742   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:10:02.281728   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:10:04.782032   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:10:04.923358   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:10:04.947755   21126 logs.go:279] 0 containers: []
	W0203 15:10:04.947769   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:10:04.947840   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:10:04.970528   21126 logs.go:279] 0 containers: []
	W0203 15:10:04.970543   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:10:04.970613   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:10:04.992853   21126 logs.go:279] 0 containers: []
	W0203 15:10:04.992866   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:10:04.992933   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:10:05.016619   21126 logs.go:279] 0 containers: []
	W0203 15:10:05.016634   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:10:05.016701   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:10:05.040528   21126 logs.go:279] 0 containers: []
	W0203 15:10:05.040540   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:10:05.040608   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:10:05.064267   21126 logs.go:279] 0 containers: []
	W0203 15:10:05.064284   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:10:05.064359   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:10:05.088460   21126 logs.go:279] 0 containers: []
	W0203 15:10:05.088473   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:10:05.088541   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:10:05.111715   21126 logs.go:279] 0 containers: []
	W0203 15:10:05.111728   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:10:05.111736   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:10:05.111744   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:10:05.150360   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:10:05.150373   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:10:05.162517   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:10:05.162530   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:10:05.216036   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:10:05.216046   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:10:05.216064   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:10:05.231481   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:10:05.231494   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:10:07.279736   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048184721s)
	I0203 15:10:09.780872   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:10:09.922899   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:10:09.947535   21126 logs.go:279] 0 containers: []
	W0203 15:10:09.947548   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:10:09.947618   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:10:09.970963   21126 logs.go:279] 0 containers: []
	W0203 15:10:09.970977   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:10:09.971050   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:10:09.993689   21126 logs.go:279] 0 containers: []
	W0203 15:10:09.993703   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:10:09.993776   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:10:10.016424   21126 logs.go:279] 0 containers: []
	W0203 15:10:10.016436   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:10:10.016503   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:10:10.039330   21126 logs.go:279] 0 containers: []
	W0203 15:10:10.039343   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:10:10.039417   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:10:10.062143   21126 logs.go:279] 0 containers: []
	W0203 15:10:10.062163   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:10:10.062237   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:10:10.085707   21126 logs.go:279] 0 containers: []
	W0203 15:10:10.085721   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:10:10.085790   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:10:10.108591   21126 logs.go:279] 0 containers: []
	W0203 15:10:10.108604   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:10:10.108611   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:10:10.108617   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:10:10.147103   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:10:10.147121   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:10:10.160657   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:10:10.160672   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:10:10.218207   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:10:10.218218   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:10:10.218225   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:10:10.233630   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:10:10.233644   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:10:12.284771   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051057356s)
	I0203 15:10:14.787204   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:10:14.922603   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:10:14.949493   21126 logs.go:279] 0 containers: []
	W0203 15:10:14.949507   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:10:14.949575   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:10:14.973522   21126 logs.go:279] 0 containers: []
	W0203 15:10:14.973535   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:10:14.973602   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:10:14.997214   21126 logs.go:279] 0 containers: []
	W0203 15:10:14.997230   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:10:14.997310   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:10:15.020954   21126 logs.go:279] 0 containers: []
	W0203 15:10:15.020967   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:10:15.021037   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:10:15.045442   21126 logs.go:279] 0 containers: []
	W0203 15:10:15.045455   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:10:15.045523   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:10:15.068231   21126 logs.go:279] 0 containers: []
	W0203 15:10:15.068253   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:10:15.068330   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:10:15.090750   21126 logs.go:279] 0 containers: []
	W0203 15:10:15.090763   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:10:15.090833   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:10:15.113819   21126 logs.go:279] 0 containers: []
	W0203 15:10:15.113831   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:10:15.113838   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:10:15.113844   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:10:15.169411   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:10:15.169422   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:10:15.169428   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:10:15.184622   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:10:15.184635   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:10:17.232866   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048171396s)
	I0203 15:10:17.233033   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:10:17.233040   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:10:17.271173   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:10:17.271187   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:10:19.783765   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:10:19.922540   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:10:19.948350   21126 logs.go:279] 0 containers: []
	W0203 15:10:19.948372   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:10:19.948447   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:10:19.970900   21126 logs.go:279] 0 containers: []
	W0203 15:10:19.970914   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:10:19.970987   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:10:19.994857   21126 logs.go:279] 0 containers: []
	W0203 15:10:19.994871   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:10:19.994943   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:10:20.018432   21126 logs.go:279] 0 containers: []
	W0203 15:10:20.018446   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:10:20.018517   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:10:20.042287   21126 logs.go:279] 0 containers: []
	W0203 15:10:20.042300   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:10:20.042380   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:10:20.066401   21126 logs.go:279] 0 containers: []
	W0203 15:10:20.066415   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:10:20.066484   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:10:20.089782   21126 logs.go:279] 0 containers: []
	W0203 15:10:20.089797   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:10:20.089867   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:10:20.113127   21126 logs.go:279] 0 containers: []
	W0203 15:10:20.113141   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:10:20.113149   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:10:20.113156   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:10:22.164016   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050793823s)
	I0203 15:10:22.164624   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:10:22.164639   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:10:22.201862   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:10:22.201883   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:10:22.214169   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:10:22.214185   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:10:22.269099   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:10:22.269111   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:10:22.269118   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:10:24.784800   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:10:24.922688   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:10:24.948246   21126 logs.go:279] 0 containers: []
	W0203 15:10:24.948259   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:10:24.948332   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:10:24.971927   21126 logs.go:279] 0 containers: []
	W0203 15:10:24.971941   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:10:24.972009   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:10:24.995968   21126 logs.go:279] 0 containers: []
	W0203 15:10:24.995980   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:10:24.996050   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:10:25.020780   21126 logs.go:279] 0 containers: []
	W0203 15:10:25.020793   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:10:25.020863   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:10:25.043821   21126 logs.go:279] 0 containers: []
	W0203 15:10:25.043833   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:10:25.043900   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:10:25.068125   21126 logs.go:279] 0 containers: []
	W0203 15:10:25.068139   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:10:25.068216   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:10:25.091207   21126 logs.go:279] 0 containers: []
	W0203 15:10:25.091222   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:10:25.091292   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:10:25.115392   21126 logs.go:279] 0 containers: []
	W0203 15:10:25.115405   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:10:25.115411   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:10:25.115420   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:10:25.153879   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:10:25.153898   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:10:25.166490   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:10:25.166505   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:10:25.229701   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:10:25.229716   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:10:25.229722   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:10:25.244922   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:10:25.244937   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:10:27.294709   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049713919s)
	I0203 15:10:29.795006   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:10:29.923060   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:10:29.949301   21126 logs.go:279] 0 containers: []
	W0203 15:10:29.949314   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:10:29.949383   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:10:29.972749   21126 logs.go:279] 0 containers: []
	W0203 15:10:29.972762   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:10:29.972830   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:10:29.995388   21126 logs.go:279] 0 containers: []
	W0203 15:10:29.995402   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:10:29.995476   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:10:30.019759   21126 logs.go:279] 0 containers: []
	W0203 15:10:30.019773   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:10:30.019844   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:10:30.043125   21126 logs.go:279] 0 containers: []
	W0203 15:10:30.043140   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:10:30.043211   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:10:30.066125   21126 logs.go:279] 0 containers: []
	W0203 15:10:30.066142   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:10:30.066212   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:10:30.089591   21126 logs.go:279] 0 containers: []
	W0203 15:10:30.089605   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:10:30.089675   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:10:30.113257   21126 logs.go:279] 0 containers: []
	W0203 15:10:30.113270   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:10:30.113276   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:10:30.113283   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:10:30.150308   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:10:30.150322   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:10:30.162339   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:10:30.162353   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:10:30.216877   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:10:30.216891   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:10:30.216897   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:10:30.232269   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:10:30.232282   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:10:32.284142   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05180267s)
	I0203 15:10:34.784992   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:10:34.922999   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:10:34.947161   21126 logs.go:279] 0 containers: []
	W0203 15:10:34.947175   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:10:34.947241   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:10:34.970514   21126 logs.go:279] 0 containers: []
	W0203 15:10:34.970529   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:10:34.970597   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:10:34.994157   21126 logs.go:279] 0 containers: []
	W0203 15:10:34.994170   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:10:34.994238   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:10:35.017531   21126 logs.go:279] 0 containers: []
	W0203 15:10:35.017544   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:10:35.017612   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:10:35.040150   21126 logs.go:279] 0 containers: []
	W0203 15:10:35.040162   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:10:35.040228   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:10:35.064306   21126 logs.go:279] 0 containers: []
	W0203 15:10:35.064320   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:10:35.064389   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:10:35.088072   21126 logs.go:279] 0 containers: []
	W0203 15:10:35.088085   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:10:35.088154   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:10:35.112118   21126 logs.go:279] 0 containers: []
	W0203 15:10:35.112132   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:10:35.112140   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:10:35.112148   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:10:35.124329   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:10:35.124345   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:10:35.179829   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:10:35.179841   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:10:35.179849   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:10:35.195238   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:10:35.195251   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:10:37.242502   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04719225s)
	I0203 15:10:37.242613   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:10:37.242620   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:10:39.780624   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:10:39.922625   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:10:39.947444   21126 logs.go:279] 0 containers: []
	W0203 15:10:39.947458   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:10:39.947531   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:10:39.970937   21126 logs.go:279] 0 containers: []
	W0203 15:10:39.970951   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:10:39.971019   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:10:39.994823   21126 logs.go:279] 0 containers: []
	W0203 15:10:39.994844   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:10:39.994923   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:10:40.018612   21126 logs.go:279] 0 containers: []
	W0203 15:10:40.018626   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:10:40.018695   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:10:40.042470   21126 logs.go:279] 0 containers: []
	W0203 15:10:40.042482   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:10:40.042549   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:10:40.064903   21126 logs.go:279] 0 containers: []
	W0203 15:10:40.064916   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:10:40.064978   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:10:40.087916   21126 logs.go:279] 0 containers: []
	W0203 15:10:40.087930   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:10:40.088000   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:10:40.111686   21126 logs.go:279] 0 containers: []
	W0203 15:10:40.111700   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:10:40.111707   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:10:40.111714   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:10:40.150018   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:10:40.150037   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:10:40.163079   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:10:40.163094   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:10:40.217919   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:10:40.217932   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:10:40.217938   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:10:40.233471   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:10:40.233484   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:10:42.284546   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051003207s)
	I0203 15:10:44.784894   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:10:44.923080   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:10:44.948311   21126 logs.go:279] 0 containers: []
	W0203 15:10:44.948324   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:10:44.948393   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:10:44.971703   21126 logs.go:279] 0 containers: []
	W0203 15:10:44.971716   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:10:44.971782   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:10:44.995369   21126 logs.go:279] 0 containers: []
	W0203 15:10:44.995382   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:10:44.995448   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:10:45.017752   21126 logs.go:279] 0 containers: []
	W0203 15:10:45.017765   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:10:45.017830   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:10:45.041649   21126 logs.go:279] 0 containers: []
	W0203 15:10:45.041664   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:10:45.041734   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:10:45.064829   21126 logs.go:279] 0 containers: []
	W0203 15:10:45.064842   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:10:45.064908   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:10:45.087393   21126 logs.go:279] 0 containers: []
	W0203 15:10:45.087407   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:10:45.087496   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:10:45.110442   21126 logs.go:279] 0 containers: []
	W0203 15:10:45.110454   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:10:45.110461   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:10:45.110469   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:10:45.149094   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:10:45.149110   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:10:45.161547   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:10:45.161562   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:10:45.216472   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:10:45.216487   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:10:45.216493   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:10:45.231814   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:10:45.231828   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:10:47.282299   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050413699s)
	I0203 15:10:49.782640   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:10:49.923442   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:10:49.948004   21126 logs.go:279] 0 containers: []
	W0203 15:10:49.948022   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:10:49.948102   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:10:49.971458   21126 logs.go:279] 0 containers: []
	W0203 15:10:49.971470   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:10:49.971540   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:10:49.995184   21126 logs.go:279] 0 containers: []
	W0203 15:10:49.995196   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:10:49.995262   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:10:50.017313   21126 logs.go:279] 0 containers: []
	W0203 15:10:50.017326   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:10:50.017396   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:10:50.040609   21126 logs.go:279] 0 containers: []
	W0203 15:10:50.040622   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:10:50.040688   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:10:50.064497   21126 logs.go:279] 0 containers: []
	W0203 15:10:50.064513   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:10:50.064585   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:10:50.087427   21126 logs.go:279] 0 containers: []
	W0203 15:10:50.087440   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:10:50.087505   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:10:50.110503   21126 logs.go:279] 0 containers: []
	W0203 15:10:50.110514   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:10:50.110521   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:10:50.110528   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:10:50.122219   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:10:50.122232   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:10:50.177524   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:10:50.177536   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:10:50.177544   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:10:50.192984   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:10:50.192997   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:10:52.240084   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047025728s)
	I0203 15:10:52.240202   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:10:52.240211   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:10:54.777287   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:10:54.922867   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:10:54.947596   21126 logs.go:279] 0 containers: []
	W0203 15:10:54.947608   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:10:54.947681   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:10:54.972708   21126 logs.go:279] 0 containers: []
	W0203 15:10:54.972721   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:10:54.972789   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:10:54.995466   21126 logs.go:279] 0 containers: []
	W0203 15:10:54.995479   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:10:54.995544   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:10:55.019071   21126 logs.go:279] 0 containers: []
	W0203 15:10:55.019083   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:10:55.019166   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:10:55.041844   21126 logs.go:279] 0 containers: []
	W0203 15:10:55.041858   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:10:55.041927   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:10:55.065570   21126 logs.go:279] 0 containers: []
	W0203 15:10:55.065582   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:10:55.065647   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:10:55.088903   21126 logs.go:279] 0 containers: []
	W0203 15:10:55.088916   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:10:55.088983   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:10:55.112555   21126 logs.go:279] 0 containers: []
	W0203 15:10:55.112568   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:10:55.112574   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:10:55.112582   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:10:55.151748   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:10:55.151780   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:10:55.164813   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:10:55.164828   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:10:55.222586   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:10:55.222598   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:10:55.222604   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:10:55.238170   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:10:55.238183   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:10:57.289559   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05131791s)
	I0203 15:10:59.790851   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:10:59.924942   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:10:59.950633   21126 logs.go:279] 0 containers: []
	W0203 15:10:59.950646   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:10:59.950717   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:10:59.974253   21126 logs.go:279] 0 containers: []
	W0203 15:10:59.974268   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:10:59.974341   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:10:59.997032   21126 logs.go:279] 0 containers: []
	W0203 15:10:59.997045   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:10:59.997111   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:11:00.021212   21126 logs.go:279] 0 containers: []
	W0203 15:11:00.021229   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:11:00.021311   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:11:00.043527   21126 logs.go:279] 0 containers: []
	W0203 15:11:00.043542   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:11:00.043609   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:11:00.066879   21126 logs.go:279] 0 containers: []
	W0203 15:11:00.066892   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:11:00.066958   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:11:00.090164   21126 logs.go:279] 0 containers: []
	W0203 15:11:00.090176   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:11:00.090245   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:11:00.113192   21126 logs.go:279] 0 containers: []
	W0203 15:11:00.113206   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:11:00.113215   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:11:00.113225   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:11:00.168472   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:11:00.168483   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:11:00.168489   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:11:00.184059   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:11:00.184073   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:11:02.235228   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051096957s)
	I0203 15:11:02.235357   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:11:02.235367   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:11:02.272191   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:11:02.272204   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:11:04.785329   21126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:11:04.923394   21126 kubeadm.go:637] restartCluster took 4m11.364356336s
	W0203 15:11:04.923508   21126 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0203 15:11:04.923527   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0203 15:11:05.340274   21126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 15:11:05.350238   21126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 15:11:05.358172   21126 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0203 15:11:05.358223   21126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 15:11:05.365833   21126 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 15:11:05.365857   21126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 15:11:05.414561   21126 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0203 15:11:05.414601   21126 kubeadm.go:322] [preflight] Running pre-flight checks
	I0203 15:11:05.715362   21126 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 15:11:05.715439   21126 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 15:11:05.715516   21126 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 15:11:05.943211   21126 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 15:11:05.943939   21126 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 15:11:05.950554   21126 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0203 15:11:06.032571   21126 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 15:11:06.054245   21126 out.go:204]   - Generating certificates and keys ...
	I0203 15:11:06.054352   21126 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0203 15:11:06.054420   21126 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0203 15:11:06.054480   21126 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 15:11:06.054522   21126 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0203 15:11:06.054579   21126 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 15:11:06.054646   21126 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0203 15:11:06.054714   21126 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0203 15:11:06.054771   21126 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0203 15:11:06.054823   21126 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 15:11:06.054911   21126 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 15:11:06.054954   21126 kubeadm.go:322] [certs] Using the existing "sa" key
	I0203 15:11:06.055018   21126 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 15:11:06.126709   21126 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 15:11:06.188982   21126 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 15:11:06.435726   21126 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 15:11:06.685245   21126 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 15:11:06.685871   21126 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 15:11:06.707580   21126 out.go:204]   - Booting up control plane ...
	I0203 15:11:06.707698   21126 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 15:11:06.707779   21126 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 15:11:06.707862   21126 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 15:11:06.707957   21126 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 15:11:06.708135   21126 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 15:11:46.697120   21126 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0203 15:11:46.697991   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:11:46.698194   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:11:51.699425   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:11:51.699665   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:12:01.702126   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:12:01.702356   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:12:21.702987   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:12:21.703194   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:13:01.705120   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:13:01.705332   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:13:01.705359   21126 kubeadm.go:322] 
	I0203 15:13:01.705411   21126 kubeadm.go:322] Unfortunately, an error has occurred:
	I0203 15:13:01.705457   21126 kubeadm.go:322] 	timed out waiting for the condition
	I0203 15:13:01.705465   21126 kubeadm.go:322] 
	I0203 15:13:01.705518   21126 kubeadm.go:322] This error is likely caused by:
	I0203 15:13:01.705566   21126 kubeadm.go:322] 	- The kubelet is not running
	I0203 15:13:01.705676   21126 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 15:13:01.705683   21126 kubeadm.go:322] 
	I0203 15:13:01.705756   21126 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 15:13:01.705781   21126 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0203 15:13:01.705803   21126 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0203 15:13:01.705808   21126 kubeadm.go:322] 
	I0203 15:13:01.705881   21126 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 15:13:01.705962   21126 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0203 15:13:01.706038   21126 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0203 15:13:01.706075   21126 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0203 15:13:01.706123   21126 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0203 15:13:01.706145   21126 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0203 15:13:01.709265   21126 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0203 15:13:01.709347   21126 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0203 15:13:01.709442   21126 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0203 15:13:01.709521   21126 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 15:13:01.709582   21126 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 15:13:01.709639   21126 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0203 15:13:01.709780   21126 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0203 15:13:01.709806   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0203 15:13:02.127437   21126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 15:13:02.137423   21126 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0203 15:13:02.137484   21126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 15:13:02.145127   21126 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 15:13:02.145145   21126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 15:13:02.193501   21126 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0203 15:13:02.193552   21126 kubeadm.go:322] [preflight] Running pre-flight checks
	I0203 15:13:02.497013   21126 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 15:13:02.497133   21126 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 15:13:02.497204   21126 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 15:13:02.724132   21126 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 15:13:02.724874   21126 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 15:13:02.731555   21126 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0203 15:13:02.801852   21126 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 15:13:02.823276   21126 out.go:204]   - Generating certificates and keys ...
	I0203 15:13:02.823350   21126 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0203 15:13:02.823418   21126 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0203 15:13:02.823511   21126 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 15:13:02.823555   21126 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0203 15:13:02.823605   21126 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 15:13:02.823675   21126 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0203 15:13:02.823726   21126 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0203 15:13:02.823773   21126 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0203 15:13:02.823829   21126 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 15:13:02.823878   21126 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 15:13:02.823903   21126 kubeadm.go:322] [certs] Using the existing "sa" key
	I0203 15:13:02.823949   21126 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 15:13:03.141635   21126 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 15:13:03.234989   21126 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 15:13:03.414460   21126 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 15:13:03.718514   21126 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 15:13:03.719019   21126 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 15:13:03.740598   21126 out.go:204]   - Booting up control plane ...
	I0203 15:13:03.740764   21126 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 15:13:03.740891   21126 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 15:13:03.740991   21126 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 15:13:03.741105   21126 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 15:13:03.741348   21126 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 15:13:43.728263   21126 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0203 15:13:43.728728   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:13:43.728878   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:13:48.730661   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:13:48.730882   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:13:58.731472   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:13:58.731677   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:14:18.732332   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:14:18.732504   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:14:58.734150   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:14:58.734308   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:14:58.734321   21126 kubeadm.go:322] 
	I0203 15:14:58.734347   21126 kubeadm.go:322] Unfortunately, an error has occurred:
	I0203 15:14:58.734373   21126 kubeadm.go:322] 	timed out waiting for the condition
	I0203 15:14:58.734377   21126 kubeadm.go:322] 
	I0203 15:14:58.734399   21126 kubeadm.go:322] This error is likely caused by:
	I0203 15:14:58.734423   21126 kubeadm.go:322] 	- The kubelet is not running
	I0203 15:14:58.734491   21126 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 15:14:58.734496   21126 kubeadm.go:322] 
	I0203 15:14:58.734576   21126 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 15:14:58.734609   21126 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0203 15:14:58.734634   21126 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0203 15:14:58.734638   21126 kubeadm.go:322] 
	I0203 15:14:58.734728   21126 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 15:14:58.734804   21126 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0203 15:14:58.734872   21126 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0203 15:14:58.734918   21126 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0203 15:14:58.734971   21126 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0203 15:14:58.734995   21126 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0203 15:14:58.738313   21126 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0203 15:14:58.738411   21126 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0203 15:14:58.738538   21126 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0203 15:14:58.738620   21126 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 15:14:58.738738   21126 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 15:14:58.738798   21126 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0203 15:14:58.738850   21126 kubeadm.go:403] StartCluster complete in 8m5.206203453s
	I0203 15:14:58.738943   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:14:58.762138   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.762150   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:14:58.762220   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:14:58.786353   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.786367   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:14:58.786448   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:14:58.811734   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.811747   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:14:58.811820   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:14:58.834722   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.834736   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:14:58.834805   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:14:58.858239   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.858253   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:14:58.858323   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:14:58.882446   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.882458   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:14:58.882525   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:14:58.906762   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.906776   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:14:58.906842   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:14:58.931071   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.931085   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:14:58.931093   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:14:58.931100   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:14:58.968187   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:14:58.968203   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:14:58.980491   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:14:58.980504   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:14:59.034723   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:14:59.034737   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:14:59.034744   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:14:59.050311   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:14:59.050325   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:15:01.100139   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049755261s)
	W0203 15:15:01.100252   21126 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0203 15:15:01.100270   21126 out.go:239] * 
	* 
	W0203 15:15:01.100389   21126 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 15:15:01.100415   21126 out.go:239] * 
	* 
	W0203 15:15:01.101097   21126 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 15:15:01.185870   21126 out.go:177] 
	W0203 15:15:01.228585   21126 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 15:15:01.228653   21126 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0203 15:15:01.228686   21126 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0203 15:15:01.249636   21126 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-136000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
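The triage steps that kubeadm and minikube print above can be run directly against the failed node. A minimal sketch, assuming the old-k8s-version-136000 profile from this run and using only commands already suggested in the output (the --extra-config value is the suggestion minikube itself prints; whether it resolves this particular kubelet failure is not established by this log):

	# Kubelet health inside the minikube node container
	out/minikube-darwin-amd64 -p old-k8s-version-136000 ssh -- sudo systemctl status kubelet
	out/minikube-darwin-amd64 -p old-k8s-version-136000 ssh -- sudo journalctl -xeu kubelet
	# Kubernetes containers started by the runtime; inspect a failing one with 'docker logs <id>'
	out/minikube-darwin-amd64 -p old-k8s-version-136000 ssh -- "docker ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup driver minikube suggests
	out/minikube-darwin-amd64 start -p old-k8s-version-136000 --extra-config=kubelet.cgroup-driver=systemd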
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-136000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-136000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423",
	        "Created": "2023-02-03T23:01:11.889189264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T23:06:49.643869127Z",
	            "FinishedAt": "2023-02-03T23:06:46.709273842Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hostname",
	        "HostsPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hosts",
	        "LogPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423-json.log",
	        "Name": "/old-k8s-version-136000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-136000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-136000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5-init/diff:/var/lib/docker/overlay2/48b9eff26e94f4439154aad348135bd66f3f3733ee1f2bd22fc60e3a240f764f/diff:/var/lib/docker/overlay2/89930e70b646c5893dab0f6f4274a9fb3b60a11d62da2f59d4b55fbf1c480a90/diff:/var/lib/docker/overlay2/3ae0575a256264d050211e3ca122b2804683b9f4323f7a2c2a2d45f4df3254dd/diff:/var/lib/docker/overlay2/6468a293a6ba199c732872fb7807de809fa2ff9ecdccaeb7146f28e1a4dc9607/diff:/var/lib/docker/overlay2/3fab248b5834a764e1996b2fea0af0100ffc2c150728124745a8e42d43a2193d/diff:/var/lib/docker/overlay2/1ec21b4015d44918fda148d959030dadcaa3527172fde96571978bdabab6921e/diff:/var/lib/docker/overlay2/5465a266a0268ad0ffa1c12afbc320e2232b025ee4eaa5c74b2f5b236ce5285d/diff:/var/lib/docker/overlay2/61b7474b98e6431b966662b98c31f46eb982bdd7098bfccdad928e6c3c0a9024/diff:/var/lib/docker/overlay2/d0925bff8df24b32d176f1438969c0c3adac5ec1bc1da61c2a8bf17e4fd9313b/diff:/var/lib/docker/overlay2/b6c213
617f12dea208efc9c642db1147a22658b32383a0256106a994fcafebca/diff:/var/lib/docker/overlay2/5127e35d4cf68de9ece51806ff390f9b88bac61eaa8bfdf4cf5d6ab1e5b2ca27/diff:/var/lib/docker/overlay2/3d041d254d21e7ec2e2abdce56a3e6eadb3f668238bf3667e7c25effdcc05940/diff:/var/lib/docker/overlay2/15bab989d641601a640d89b58f645e79668cb801bf10066ecd9790e4c8bbd4f1/diff:/var/lib/docker/overlay2/d6e45696a59c84a5b4ad5ad0bec8b561335a71b3c4eaaa35bcbcc00bd3fbcc1a/diff:/var/lib/docker/overlay2/d0a13d3859926a84eb9c7b571fa8c670d15ebf0ab75e6e8971a7b8679b316ca1/diff:/var/lib/docker/overlay2/a5096e1509a8455c4d67f60b17102a08c795ad1bdbeeac3dd75c3b05ec6d922c/diff:/var/lib/docker/overlay2/aeeda7f653d5dcfbb5ef8a7b53a6aba12a5892c04d984f10a71be11833addb2d/diff:/var/lib/docker/overlay2/84bf768303dfde933d5690feb659b1acd5419ca63d78c4760218d578794c3bbe/diff:/var/lib/docker/overlay2/dec6762f77828143e0cb548cc3a6bb9cc10b9f4376070bc49558da8dfd0b7d2e/diff:/var/lib/docker/overlay2/cc9805f6c705d4d0c6c7675e7745ab0dcdd90879809a2089256c0606e80cee7a/diff:/var/lib/d
ocker/overlay2/e34b4063934c19fe1e614a10ef1e9582f55283fa37c9d0b89d0df8ca32a8a03a/diff:/var/lib/docker/overlay2/c6b6cf801ae9739234022d5e5c55176ee1249b3441400f8b9dbde2c15c6d66e3/diff:/var/lib/docker/overlay2/73dfe58a9f4125f321d10ef97d5c2d4951480455bb243f166600ead63c22f5c2/diff:/var/lib/docker/overlay2/476ba412f9e61cc020124b5051db9c99ea08176881e535e0b5fe6ddb51b94a72/diff:/var/lib/docker/overlay2/2729a4e84f2d55dc49c9417254fc26c0baa21f93cd9b58386f869cf5add162c1/diff:/var/lib/docker/overlay2/8523001ce06172b58b31ebf311f62bf435ed3a3d48fec58d3f1239f29386a28b/diff:/var/lib/docker/overlay2/2b7edb3177897200229f3ba188cfd00e16df91cf85b91a5f08ddbfa15d898a3d/diff:/var/lib/docker/overlay2/94231ff2ac5bf304d3c25d204f1a7b2195ef2230bfbb7bb5a1a1d6f2f4faad6a/diff:/var/lib/docker/overlay2/698d3cd800bae40e0aeb942360c67b793550c24bab66ba43080cbcaa500a9069/diff:/var/lib/docker/overlay2/6aadd46423b70866f00e0f4f83310711c1bc22b4dc8989e6b58cd6254540c428/diff:/var/lib/docker/overlay2/035afbe91bfd3bebd444b29f3ceed1e954aab275fca0c8aaf2364df71f4
6e0c3/diff:/var/lib/docker/overlay2/bc68049ba1568fe8bb188720c62bcc993e62a364901ba41a533aa2991cceaf82/diff:/var/lib/docker/overlay2/c3373595ff40ba0ece2698f99fc2e1c9a83c0ef6a1df119125e3009256dee2ed/diff:/var/lib/docker/overlay2/59c87dca7d8987a7e1b5cd959772e06b96d6ecb36399ff9e35a1ecfe4ed33345/diff:/var/lib/docker/overlay2/22434c33a4994657a469b040789f269ac912f4046d76f2531dff05de4700fb3b/diff:/var/lib/docker/overlay2/699ea76dd0a43fedc031501535714f087d7ec3f37593390c9e81c029373c7f8f/diff:/var/lib/docker/overlay2/e9414c264977801651ed9f3ee268cd0f245614747e184e8f3170e1e95d1fc081/diff:/var/lib/docker/overlay2/2781a0c689754699793aa9bdfeeabdaa1c6905e265302dd267c6c12daa01eb9c/diff:/var/lib/docker/overlay2/4b59a1fc73d3e865eaf7e2e62fd6d2808234c79d79b6b30f6b1a482a291580d3/diff:/var/lib/docker/overlay2/7f51e83dcff3227064daa2b7cc6a7c87f8f5e415fa8723316c24512d6029941d/diff:/var/lib/docker/overlay2/50662c60babc4d383f2af76fc66f3712bcc9e85a50f0525fa680c8336af46ce3/diff:/var/lib/docker/overlay2/2112d8437fae31ae95f85bdf08e3f29d09d7b8
adf34c9608a2e3bfecc049e0c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-136000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-136000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-136000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b43f019aed40f7f6d26e5fc19850e1e26591afe1aebb383bfc62a7e02b87e1da",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55352"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55353"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55354"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55356"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b43f019aed40",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-136000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "845795d4cf37",
	                        "old-k8s-version-136000"
	                    ],
	                    "NetworkID": "a4c82c2a3592223db620bf95332091613324019646bbe58152af123c5085aba4",
	                    "EndpointID": "9d19243bdc4b0034b95a676b71e1e9f6a1d25ba7078faa4d4b80def87e2b6889",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
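The full docker inspect dump above can be narrowed with Docker's Go-template --format flag; the log itself uses the same mechanism later for single fields. A minimal sketch, reusing only fields present in the JSON above (container name taken from this run):

	# Container state only, instead of the full inspect JSON
	docker inspect -f '{{.State.Status}} {{.State.StartedAt}} {{.State.FinishedAt}}' old-k8s-version-136000
	# Host port published for the API server (8443/tcp), as recorded under NetworkSettings.Ports
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-136000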
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 2 (406.579435ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
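The check above only reads the Host field, which still reports Running even though the control plane never came up. A hedged sketch of a broader status query, assuming the same profile and that minikube's status template exposes Kubelet and APIServer fields alongside Host (as its default output suggests):

	# Per-component status for the profile
	out/minikube-darwin-amd64 status -p old-k8s-version-136000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'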
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-136000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-136000 logs -n 25: (3.565080641s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p false-292000 sudo                              | false-292000           | jenkins | v1.29.0 | 03 Feb 23 15:01 PST | 03 Feb 23 15:01 PST |
	|         | containerd config dump                            |                        |         |         |                     |                     |
	| ssh     | -p false-292000 sudo systemctl                    | false-292000           | jenkins | v1.29.0 | 03 Feb 23 15:01 PST |                     |
	|         | status crio --all --full                          |                        |         |         |                     |                     |
	|         | --no-pager                                        |                        |         |         |                     |                     |
	| ssh     | -p false-292000 sudo systemctl                    | false-292000           | jenkins | v1.29.0 | 03 Feb 23 15:01 PST | 03 Feb 23 15:01 PST |
	|         | cat crio --no-pager                               |                        |         |         |                     |                     |
	| ssh     | -p false-292000 sudo find                         | false-292000           | jenkins | v1.29.0 | 03 Feb 23 15:01 PST | 03 Feb 23 15:01 PST |
	|         | /etc/crio -type f -exec sh -c                     |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                              |                        |         |         |                     |                     |
	| ssh     | -p false-292000 sudo crio                         | false-292000           | jenkins | v1.29.0 | 03 Feb 23 15:01 PST | 03 Feb 23 15:01 PST |
	|         | config                                            |                        |         |         |                     |                     |
	| delete  | -p false-292000                                   | false-292000           | jenkins | v1.29.0 | 03 Feb 23 15:01 PST | 03 Feb 23 15:01 PST |
	| start   | -p no-preload-520000                              | no-preload-520000      | jenkins | v1.29.0 | 03 Feb 23 15:01 PST | 03 Feb 23 15:02 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-520000        | no-preload-520000      | jenkins | v1.29.0 | 03 Feb 23 15:02 PST | 03 Feb 23 15:02 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p no-preload-520000                              | no-preload-520000      | jenkins | v1.29.0 | 03 Feb 23 15:02 PST | 03 Feb 23 15:02 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-520000             | no-preload-520000      | jenkins | v1.29.0 | 03 Feb 23 15:02 PST | 03 Feb 23 15:02 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-520000                              | no-preload-520000      | jenkins | v1.29.0 | 03 Feb 23 15:02 PST | 03 Feb 23 15:12 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-136000   | old-k8s-version-136000 | jenkins | v1.29.0 | 03 Feb 23 15:05 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-136000                         | old-k8s-version-136000 | jenkins | v1.29.0 | 03 Feb 23 15:06 PST | 03 Feb 23 15:06 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-136000        | old-k8s-version-136000 | jenkins | v1.29.0 | 03 Feb 23 15:06 PST | 03 Feb 23 15:06 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-136000                         | old-k8s-version-136000 | jenkins | v1.29.0 | 03 Feb 23 15:06 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| ssh     | -p no-preload-520000 sudo                         | no-preload-520000      | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	|         | crictl images -o json                             |                        |         |         |                     |                     |
	| pause   | -p no-preload-520000                              | no-preload-520000      | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| unpause | -p no-preload-520000                              | no-preload-520000      | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| delete  | -p no-preload-520000                              | no-preload-520000      | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	| delete  | -p no-preload-520000                              | no-preload-520000      | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	| start   | -p embed-certs-913000                             | embed-certs-913000     | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:13 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-913000       | embed-certs-913000     | jenkins | v1.29.0 | 03 Feb 23 15:13 PST | 03 Feb 23 15:13 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p embed-certs-913000                             | embed-certs-913000     | jenkins | v1.29.0 | 03 Feb 23 15:13 PST | 03 Feb 23 15:13 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-913000            | embed-certs-913000     | jenkins | v1.29.0 | 03 Feb 23 15:13 PST | 03 Feb 23 15:13 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-913000                             | embed-certs-913000     | jenkins | v1.29.0 | 03 Feb 23 15:13 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/03 15:13:40
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 15:13:40.889624   21936 out.go:296] Setting OutFile to fd 1 ...
	I0203 15:13:40.889786   21936 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 15:13:40.889791   21936 out.go:309] Setting ErrFile to fd 2...
	I0203 15:13:40.889795   21936 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 15:13:40.889899   21936 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 15:13:40.890359   21936 out.go:303] Setting JSON to false
	I0203 15:13:40.908634   21936 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":4395,"bootTime":1675461625,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0203 15:13:40.908733   21936 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0203 15:13:40.931148   21936 out.go:177] * [embed-certs-913000] minikube v1.29.0 on Darwin 13.2
	I0203 15:13:40.974689   21936 notify.go:220] Checking for updates...
	I0203 15:13:40.996561   21936 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 15:13:41.017816   21936 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 15:13:41.060518   21936 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0203 15:13:41.103531   21936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 15:13:41.149536   21936 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	I0203 15:13:41.171903   21936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 15:13:41.194445   21936 config.go:180] Loaded profile config "embed-certs-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 15:13:41.195133   21936 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 15:13:41.256384   21936 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0203 15:13:41.256511   21936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 15:13:41.400215   21936 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 23:13:41.307325263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 15:13:41.442860   21936 out.go:177] * Using the docker driver based on existing profile
	I0203 15:13:41.466067   21936 start.go:296] selected driver: docker
	I0203 15:13:41.466098   21936 start.go:857] validating driver "docker" against &{Name:embed-certs-913000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-913000 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:13:41.466220   21936 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 15:13:41.469468   21936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 15:13:41.609749   21936 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 23:13:41.518890119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 15:13:41.609898   21936 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 15:13:41.609916   21936 cni.go:84] Creating CNI manager for ""
	I0203 15:13:41.609928   21936 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 15:13:41.609939   21936 start_flags.go:319] config:
	{Name:embed-certs-913000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-913000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:13:41.631945   21936 out.go:177] * Starting control plane node embed-certs-913000 in cluster embed-certs-913000
	I0203 15:13:41.653794   21936 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 15:13:41.675573   21936 out.go:177] * Pulling base image ...
	I0203 15:13:41.696598   21936 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 15:13:41.696667   21936 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 15:13:41.696690   21936 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0203 15:13:41.696709   21936 cache.go:57] Caching tarball of preloaded images
	I0203 15:13:41.696921   21936 preload.go:174] Found /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 15:13:41.696945   21936 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0203 15:13:41.697975   21936 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/embed-certs-913000/config.json ...
	I0203 15:13:41.753540   21936 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon, skipping pull
	I0203 15:13:41.753558   21936 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in daemon, skipping load
	I0203 15:13:41.753583   21936 cache.go:193] Successfully downloaded all kic artifacts
	I0203 15:13:41.753623   21936 start.go:364] acquiring machines lock for embed-certs-913000: {Name:mk3c5271cac8a01cc6377cd938202ff26f174a70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 15:13:41.753706   21936 start.go:368] acquired machines lock for "embed-certs-913000" in 64.386µs
	I0203 15:13:41.753728   21936 start.go:96] Skipping create...Using existing machine configuration
	I0203 15:13:41.753738   21936 fix.go:55] fixHost starting: 
	I0203 15:13:41.754003   21936 cli_runner.go:164] Run: docker container inspect embed-certs-913000 --format={{.State.Status}}
	I0203 15:13:41.812284   21936 fix.go:103] recreateIfNeeded on embed-certs-913000: state=Stopped err=<nil>
	W0203 15:13:41.812314   21936 fix.go:129] unexpected machine state, will restart: <nil>
	I0203 15:13:41.834427   21936 out.go:177] * Restarting existing docker container for "embed-certs-913000" ...
	I0203 15:13:41.856280   21936 cli_runner.go:164] Run: docker start embed-certs-913000
	I0203 15:13:42.193090   21936 cli_runner.go:164] Run: docker container inspect embed-certs-913000 --format={{.State.Status}}
	I0203 15:13:42.251929   21936 kic.go:426] container "embed-certs-913000" state is running.
	I0203 15:13:42.252538   21936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-913000
	I0203 15:13:42.316009   21936 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/embed-certs-913000/config.json ...
	I0203 15:13:42.316586   21936 machine.go:88] provisioning docker machine ...
	I0203 15:13:42.316634   21936 ubuntu.go:169] provisioning hostname "embed-certs-913000"
	I0203 15:13:42.316717   21936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-913000
	I0203 15:13:42.385503   21936 main.go:141] libmachine: Using SSH client type: native
	I0203 15:13:42.385709   21936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55532 <nil> <nil>}
	I0203 15:13:42.385725   21936 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-913000 && echo "embed-certs-913000" | sudo tee /etc/hostname
	I0203 15:13:42.520598   21936 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-913000
	
	I0203 15:13:42.520683   21936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-913000
	I0203 15:13:42.581041   21936 main.go:141] libmachine: Using SSH client type: native
	I0203 15:13:42.581182   21936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55532 <nil> <nil>}
	I0203 15:13:42.581197   21936 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-913000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-913000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-913000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 15:13:42.712322   21936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 15:13:42.712346   21936 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15770-1719/.minikube CaCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15770-1719/.minikube}
	I0203 15:13:42.712377   21936 ubuntu.go:177] setting up certificates
	I0203 15:13:42.712387   21936 provision.go:83] configureAuth start
	I0203 15:13:42.712461   21936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-913000
	I0203 15:13:42.769368   21936 provision.go:138] copyHostCerts
	I0203 15:13:42.769465   21936 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem, removing ...
	I0203 15:13:42.769476   21936 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem
	I0203 15:13:42.769583   21936 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem (1078 bytes)
	I0203 15:13:42.769801   21936 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem, removing ...
	I0203 15:13:42.769810   21936 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem
	I0203 15:13:42.769869   21936 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem (1123 bytes)
	I0203 15:13:42.770018   21936 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem, removing ...
	I0203 15:13:42.770024   21936 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem
	I0203 15:13:42.770080   21936 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem (1675 bytes)
	I0203 15:13:42.770203   21936 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem org=jenkins.embed-certs-913000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-913000]
	I0203 15:13:43.101895   21936 provision.go:172] copyRemoteCerts
	I0203 15:13:43.101957   21936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 15:13:43.102006   21936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-913000
	I0203 15:13:43.159411   21936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55532 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/embed-certs-913000/id_rsa Username:docker}
	I0203 15:13:43.252813   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 15:13:43.270027   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0203 15:13:43.287433   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0203 15:13:43.304628   21936 provision.go:86] duration metric: configureAuth took 592.214785ms
	I0203 15:13:43.304643   21936 ubuntu.go:193] setting minikube options for container-runtime
	I0203 15:13:43.304794   21936 config.go:180] Loaded profile config "embed-certs-913000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 15:13:43.304856   21936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-913000
	I0203 15:13:43.361981   21936 main.go:141] libmachine: Using SSH client type: native
	I0203 15:13:43.362150   21936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55532 <nil> <nil>}
	I0203 15:13:43.362162   21936 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 15:13:43.491684   21936 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0203 15:13:43.491697   21936 ubuntu.go:71] root file system type: overlay
	I0203 15:13:43.491814   21936 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 15:13:43.491900   21936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-913000
	I0203 15:13:43.548348   21936 main.go:141] libmachine: Using SSH client type: native
	I0203 15:13:43.548495   21936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55532 <nil> <nil>}
	I0203 15:13:43.548558   21936 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 15:13:43.682326   21936 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 15:13:43.682403   21936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-913000
	I0203 15:13:43.740227   21936 main.go:141] libmachine: Using SSH client type: native
	I0203 15:13:43.740383   21936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55532 <nil> <nil>}
	I0203 15:13:43.740396   21936 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 15:13:43.874170   21936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
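The step above swaps in the freshly rendered unit only when it differs from what is already installed, so an unchanged configuration never triggers a docker restart. A minimal sketch of the same update-if-changed pattern, assuming the new unit has already been written to docker.service.new on the node:

	# diff exits 0 when the files are identical, so the restart branch only runs on change
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload     # pick up the edited unit
	  sudo systemctl enable docker     # keep it enabled across reboots
	  sudo systemctl restart docker    # apply the new ExecStart flags
	fi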
	I0203 15:13:43.874186   21936 machine.go:91] provisioned docker machine in 1.557557216s
	I0203 15:13:43.874193   21936 start.go:300] post-start starting for "embed-certs-913000" (driver="docker")
	I0203 15:13:43.874198   21936 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 15:13:43.874277   21936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 15:13:43.874333   21936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-913000
	I0203 15:13:43.931952   21936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55532 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/embed-certs-913000/id_rsa Username:docker}
	I0203 15:13:44.023188   21936 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 15:13:44.026709   21936 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0203 15:13:44.026727   21936 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0203 15:13:44.026737   21936 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0203 15:13:44.026743   21936 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0203 15:13:44.026751   21936 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/addons for local assets ...
	I0203 15:13:44.026838   21936 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/files for local assets ...
	I0203 15:13:44.026995   21936 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> 25682.pem in /etc/ssl/certs
	I0203 15:13:44.027165   21936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 15:13:44.034646   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /etc/ssl/certs/25682.pem (1708 bytes)
	I0203 15:13:44.051759   21936 start.go:303] post-start completed in 177.550707ms
	I0203 15:13:44.051845   21936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 15:13:44.051900   21936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-913000
	I0203 15:13:44.108360   21936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55532 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/embed-certs-913000/id_rsa Username:docker}
	I0203 15:13:44.196124   21936 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0203 15:13:44.200697   21936 fix.go:57] fixHost completed within 2.44690473s
	I0203 15:13:44.200709   21936 start.go:83] releasing machines lock for "embed-certs-913000", held for 2.446941157s
	I0203 15:13:44.200788   21936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-913000
	I0203 15:13:44.257914   21936 ssh_runner.go:195] Run: cat /version.json
	I0203 15:13:44.257938   21936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 15:13:44.257993   21936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-913000
	I0203 15:13:44.258032   21936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-913000
	I0203 15:13:44.318243   21936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55532 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/embed-certs-913000/id_rsa Username:docker}
	I0203 15:13:44.318341   21936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55532 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/embed-certs-913000/id_rsa Username:docker}
	I0203 15:13:44.466148   21936 ssh_runner.go:195] Run: systemctl --version
	I0203 15:13:44.470850   21936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 15:13:44.476021   21936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0203 15:13:44.491570   21936 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0203 15:13:44.491692   21936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 15:13:44.499149   21936 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0203 15:13:44.511796   21936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 15:13:44.519327   21936 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
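The two find commands above first patch any loopback CNI config (adding a "name" field and pinning cniVersion to 1.0.0) and then park bridge/podman configs out of the way by renaming them to *.mk_disabled, so only the CNI that minikube later writes stays active. A minimal sketch of the disable-by-rename step, assuming configs live in /etc/cni/net.d:

	# rename bridge/podman CNI configs so the container runtime ignores them
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;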
	I0203 15:13:44.519343   21936 start.go:483] detecting cgroup driver to use...
	I0203 15:13:44.519354   21936 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 15:13:44.519440   21936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 15:13:44.532724   21936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0203 15:13:44.541532   21936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 15:13:44.550308   21936 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 15:13:44.550365   21936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 15:13:44.559005   21936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 15:13:44.567582   21936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 15:13:44.576083   21936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 15:13:44.584511   21936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 15:13:44.592466   21936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
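Because the detected host cgroup driver is cgroupfs, the sed edits above align containerd with it (SystemdCgroup = false), switch legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A minimal sketch of the same idea as one script, assuming containerd reads /etc/containerd/config.toml:

	# force containerd onto the cgroupfs driver and the runc v2 shim, then restart it
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd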
	I0203 15:13:44.601067   21936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 15:13:44.608730   21936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 15:13:44.615887   21936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:13:44.684928   21936 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 15:13:44.755984   21936 start.go:483] detecting cgroup driver to use...
	I0203 15:13:44.756006   21936 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 15:13:44.756068   21936 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 15:13:44.768648   21936 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0203 15:13:44.768719   21936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 15:13:44.778762   21936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 15:13:44.793814   21936 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 15:13:44.897215   21936 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 15:13:44.964604   21936 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 15:13:44.964619   21936 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0203 15:13:45.002335   21936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:13:45.095667   21936 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 15:13:45.351447   21936 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 15:13:45.419303   21936 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 15:13:45.493932   21936 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 15:13:45.563931   21936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:13:45.632993   21936 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 15:13:45.654879   21936 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 15:13:45.654979   21936 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 15:13:45.659033   21936 start.go:551] Will wait 60s for crictl version
	I0203 15:13:45.659078   21936 ssh_runner.go:195] Run: which crictl
	I0203 15:13:45.662776   21936 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 15:13:45.769018   21936 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0203 15:13:45.769102   21936 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 15:13:45.800522   21936 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 15:13:45.854114   21936 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0203 15:13:45.854299   21936 cli_runner.go:164] Run: docker exec -t embed-certs-913000 dig +short host.docker.internal
	I0203 15:13:43.728263   21126 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0203 15:13:43.728728   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:13:43.728878   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:13:45.967471   21936 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0203 15:13:45.967575   21936 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0203 15:13:45.972093   21936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
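The hosts-file update above follows a write-temp-then-copy pattern: strip any existing host.minikube.internal line, append the freshly resolved address, and copy the result back over /etc/hosts so a partially written file is never left in place. A minimal sketch of the same pattern, with 192.168.65.2 taken from the dig output above:

	# rebuild /etc/hosts with a single host.minikube.internal entry
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.65.2\thost.minikube.internal\n'; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts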
	I0203 15:13:45.982256   21936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-913000
	I0203 15:13:46.040441   21936 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 15:13:46.040514   21936 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 15:13:46.065830   21936 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0203 15:13:46.065847   21936 docker.go:560] Images already preloaded, skipping extraction
	I0203 15:13:46.065922   21936 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 15:13:46.090409   21936 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0203 15:13:46.090431   21936 cache_images.go:84] Images are preloaded, skipping loading
	I0203 15:13:46.090514   21936 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 15:13:46.161000   21936 cni.go:84] Creating CNI manager for ""
	I0203 15:13:46.161029   21936 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 15:13:46.161052   21936 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0203 15:13:46.161069   21936 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-913000 NodeName:embed-certs-913000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0203 15:13:46.161186   21936 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-913000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 15:13:46.161268   21936 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-913000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:embed-certs-913000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0203 15:13:46.161332   21936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0203 15:13:46.169426   21936 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 15:13:46.169486   21936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 15:13:46.176903   21936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0203 15:13:46.189895   21936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 15:13:46.202829   21936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0203 15:13:46.215727   21936 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0203 15:13:46.219714   21936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 15:13:46.229694   21936 certs.go:56] Setting up /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/embed-certs-913000 for IP: 192.168.76.2
	I0203 15:13:46.229714   21936 certs.go:186] acquiring lock for shared ca certs: {Name:mkdec04c6cc16ac0dcab0ae849b602e6c1942576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:13:46.229884   21936 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key
	I0203 15:13:46.229938   21936 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key
	I0203 15:13:46.230029   21936 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/embed-certs-913000/client.key
	I0203 15:13:46.230095   21936 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/embed-certs-913000/apiserver.key.31bdca25
	I0203 15:13:46.230148   21936 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/embed-certs-913000/proxy-client.key
	I0203 15:13:46.230345   21936 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem (1338 bytes)
	W0203 15:13:46.230384   21936 certs.go:397] ignoring /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568_empty.pem, impossibly tiny 0 bytes
	I0203 15:13:46.230394   21936 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem (1675 bytes)
	I0203 15:13:46.230426   21936 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem (1078 bytes)
	I0203 15:13:46.230461   21936 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem (1123 bytes)
	I0203 15:13:46.230492   21936 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem (1675 bytes)
	I0203 15:13:46.230567   21936 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem (1708 bytes)
	I0203 15:13:46.231121   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/embed-certs-913000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0203 15:13:46.248829   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/embed-certs-913000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0203 15:13:46.266252   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/embed-certs-913000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 15:13:46.283675   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/embed-certs-913000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0203 15:13:46.301091   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 15:13:46.318168   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 15:13:46.335903   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 15:13:46.353409   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 15:13:46.370832   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /usr/share/ca-certificates/25682.pem (1708 bytes)
	I0203 15:13:46.387991   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 15:13:46.405144   21936 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem --> /usr/share/ca-certificates/2568.pem (1338 bytes)
	I0203 15:13:46.422532   21936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 15:13:46.435589   21936 ssh_runner.go:195] Run: openssl version
	I0203 15:13:46.441263   21936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2568.pem && ln -fs /usr/share/ca-certificates/2568.pem /etc/ssl/certs/2568.pem"
	I0203 15:13:46.449621   21936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2568.pem
	I0203 15:13:46.453612   21936 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb  3 22:13 /usr/share/ca-certificates/2568.pem
	I0203 15:13:46.453656   21936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2568.pem
	I0203 15:13:46.458953   21936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2568.pem /etc/ssl/certs/51391683.0"
	I0203 15:13:46.466497   21936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25682.pem && ln -fs /usr/share/ca-certificates/25682.pem /etc/ssl/certs/25682.pem"
	I0203 15:13:46.474600   21936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25682.pem
	I0203 15:13:46.478631   21936 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb  3 22:13 /usr/share/ca-certificates/25682.pem
	I0203 15:13:46.478679   21936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25682.pem
	I0203 15:13:46.484270   21936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25682.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 15:13:46.491814   21936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 15:13:46.499943   21936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 15:13:46.503953   21936 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb  3 22:08 /usr/share/ca-certificates/minikubeCA.pem
	I0203 15:13:46.504008   21936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 15:13:46.509331   21936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
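Each CA copied under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 above), which is how OpenSSL locates trust anchors at verification time. A minimal sketch of that hash-and-link step for one certificate, assuming it is already linked as /etc/ssl/certs/minikubeCA.pem:

	# compute the subject hash and create the <hash>.0 symlink OpenSSL expects
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"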
	I0203 15:13:46.516870   21936 kubeadm.go:401] StartCluster: {Name:embed-certs-913000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-913000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:13:46.516988   21936 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 15:13:46.541676   21936 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 15:13:46.549711   21936 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0203 15:13:46.549731   21936 kubeadm.go:633] restartCluster start
	I0203 15:13:46.549791   21936 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0203 15:13:46.557750   21936 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:46.557830   21936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-913000
	I0203 15:13:46.616843   21936 kubeconfig.go:135] verify returned: extract IP: "embed-certs-913000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 15:13:46.617009   21936 kubeconfig.go:146] "embed-certs-913000" context is missing from /Users/jenkins/minikube-integration/15770-1719/kubeconfig - will repair!
	I0203 15:13:46.617307   21936 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/kubeconfig: {Name:mkf113f45b09a6304f4248a99f0e16d42a3468fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:13:46.618681   21936 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0203 15:13:46.627081   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:46.627153   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:46.636867   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:47.137848   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:47.137984   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:47.148954   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:47.639021   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:47.639188   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:47.650960   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:48.137344   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:48.137423   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:48.147125   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:48.638530   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:48.638680   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:48.649871   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:49.138433   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:49.138625   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:49.149850   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:49.637116   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:49.637196   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:49.646767   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:50.137116   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:50.137284   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:50.147879   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:50.639098   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:50.639278   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:50.650514   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:48.730661   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:13:48.730882   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:13:51.137120   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:51.137193   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:51.146901   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:51.638938   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:51.639086   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:51.650180   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:52.139114   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:52.139264   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:52.150445   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:52.637114   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:52.637211   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:52.646838   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:53.138320   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:53.138433   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:53.149486   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:53.637187   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:53.637385   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:53.648223   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:54.137125   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:54.137198   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:54.146557   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:54.639192   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:54.639455   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:54.650447   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:55.139205   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:55.139349   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:55.150489   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:55.637139   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:55.637257   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:55.646785   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:56.139150   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:56.139346   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:56.150529   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:56.637279   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:56.637398   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:56.648657   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:56.648667   21936 api_server.go:165] Checking apiserver status ...
	I0203 15:13:56.648716   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:13:56.657167   21936 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:56.657178   21936 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0203 15:13:56.657186   21936 kubeadm.go:1120] stopping kube-system containers ...
	I0203 15:13:56.657255   21936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 15:13:56.682637   21936 docker.go:456] Stopping containers: [6db074931160 335d6face3ca f04be299b1c9 2681595636b8 261af1bd57dd f085d463fe6a 5c5486ffac89 427631a949fa d1f8d6057d5a ded6c955e4fe dbb3cca07fef 17de30252f8f 78a179d54de7 7bdf0e6dd1ba 0fe60d7e884c 6d3aedb9d117]
	I0203 15:13:56.682722   21936 ssh_runner.go:195] Run: docker stop 6db074931160 335d6face3ca f04be299b1c9 2681595636b8 261af1bd57dd f085d463fe6a 5c5486ffac89 427631a949fa d1f8d6057d5a ded6c955e4fe dbb3cca07fef 17de30252f8f 78a179d54de7 7bdf0e6dd1ba 0fe60d7e884c 6d3aedb9d117
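Before reconfiguring, every kube-system container is stopped by name pattern so the control plane comes back from the regenerated static-pod manifests rather than the stale containers. A minimal sketch of the same stop step in one pipeline, assuming Docker names system containers k8s_<container>_<pod>_(kube-system)_... as in the filter above:

	# stop all kube-system containers in a single docker call
	docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop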
	I0203 15:13:56.707669   21936 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 15:13:56.718381   21936 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 15:13:56.726209   21936 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Feb  3 23:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb  3 23:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Feb  3 23:12 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb  3 23:12 /etc/kubernetes/scheduler.conf
	
	I0203 15:13:56.726278   21936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0203 15:13:56.734009   21936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0203 15:13:56.741631   21936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0203 15:13:56.748930   21936 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:56.748982   21936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 15:13:56.756363   21936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0203 15:13:56.763969   21936 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:13:56.764027   21936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 15:13:56.771336   21936 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 15:13:56.778827   21936 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0203 15:13:56.778839   21936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:13:56.833248   21936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:13:57.533870   21936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:13:57.661809   21936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:13:57.720280   21936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
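Rather than a full kubeadm init, the restart path replays individual init phases against the existing data directory: certs, kubeconfigs, kubelet-start, the control-plane static pods, and a local etcd, all driven by the same /var/tmp/minikube/kubeadm.yaml. A minimal sketch of that phase sequence, assuming kubeadm and the config file are already on the node:

	# re-run only the init phases needed to reconfigure an existing cluster
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done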
	I0203 15:13:57.818961   21936 api_server.go:51] waiting for apiserver process to appear ...
	I0203 15:13:57.819027   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:13:58.328562   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:13:58.828513   21936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:13:58.896190   21936 api_server.go:71] duration metric: took 1.077203412s to wait for apiserver process to appear ...
	I0203 15:13:58.896225   21936 api_server.go:87] waiting for apiserver healthz status ...
	I0203 15:13:58.896242   21936 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55531/healthz ...
	I0203 15:13:58.899014   21936 api_server.go:268] stopped: https://127.0.0.1:55531/healthz: Get "https://127.0.0.1:55531/healthz": EOF
	I0203 15:13:59.399386   21936 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55531/healthz ...
	I0203 15:14:01.206314   21936 api_server.go:278] https://127.0.0.1:55531/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 15:14:01.206334   21936 api_server.go:102] status: https://127.0.0.1:55531/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 15:14:01.399416   21936 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55531/healthz ...
	I0203 15:14:01.405949   21936 api_server.go:278] https://127.0.0.1:55531/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 15:14:01.405963   21936 api_server.go:102] status: https://127.0.0.1:55531/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 15:14:01.900390   21936 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55531/healthz ...
	I0203 15:14:01.906052   21936 api_server.go:278] https://127.0.0.1:55531/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 15:14:01.906064   21936 api_server.go:102] status: https://127.0.0.1:55531/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 15:14:02.399452   21936 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:55531/healthz ...
	I0203 15:14:02.404790   21936 api_server.go:278] https://127.0.0.1:55531/healthz returned 200:
	ok
	I0203 15:14:02.411271   21936 api_server.go:140] control plane version: v1.26.1
	I0203 15:14:02.411285   21936 api_server.go:130] duration metric: took 3.514973691s to wait for apiserver health ...
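The "reason withheld" entries in the 500 responses above are the apiserver hiding per-check failure details from callers it does not consider authorized to see them; once the cluster is reachable with admin credentials, the same checks can be listed by name with a verbose health query (a sketch, assuming kubectl is pointed at this v1.26.1 cluster):

	kubectl get --raw '/healthz?verbose'
	kubectl get --raw '/readyz?verbose'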
	I0203 15:14:02.411292   21936 cni.go:84] Creating CNI manager for ""
	I0203 15:14:02.411301   21936 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 15:14:02.449292   21936 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0203 15:13:58.731472   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:13:58.731677   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:14:02.475896   21936 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 15:14:02.485058   21936 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
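For context, the bridge conflist that minikube writes to /etc/cni/net.d/1-k8s.conflist is normally a small JSON chain of the bridge and portmap plugins, roughly like the sketch below (illustrative only, not the exact 457 bytes scp'd above; the subnet and flags can differ by minikube version):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}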
	I0203 15:14:02.498094   21936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 15:14:02.505972   21936 system_pods.go:59] 8 kube-system pods found
	I0203 15:14:02.505987   21936 system_pods.go:61] "coredns-787d4945fb-m988z" [89217f3a-4d6b-443a-812b-9a6ad117ca19] Running
	I0203 15:14:02.505992   21936 system_pods.go:61] "etcd-embed-certs-913000" [cf2035a3-f4f9-4637-8060-375a7af70aac] Running
	I0203 15:14:02.505995   21936 system_pods.go:61] "kube-apiserver-embed-certs-913000" [e59c2a4d-8b9f-4447-93b8-573a4978202e] Running
	I0203 15:14:02.505999   21936 system_pods.go:61] "kube-controller-manager-embed-certs-913000" [8c258bd4-cd42-4319-b929-141995ebccd7] Running
	I0203 15:14:02.506002   21936 system_pods.go:61] "kube-proxy-97s59" [5705aff1-3b94-4ecf-923e-4863d7460bf6] Running
	I0203 15:14:02.506007   21936 system_pods.go:61] "kube-scheduler-embed-certs-913000" [4f3d46ed-3201-435f-b0c5-11f872264cdf] Running
	I0203 15:14:02.506017   21936 system_pods.go:61] "metrics-server-7997d45854-9rm6r" [84c3818b-dfe8-40a3-abda-4e86ae7284de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0203 15:14:02.506022   21936 system_pods.go:61] "storage-provisioner" [615c7cac-9752-462b-be04-101a325aa4c9] Running
	I0203 15:14:02.506026   21936 system_pods.go:74] duration metric: took 7.923768ms to wait for pod list to return data ...
	I0203 15:14:02.506033   21936 node_conditions.go:102] verifying NodePressure condition ...
	I0203 15:14:02.509396   21936 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0203 15:14:02.509409   21936 node_conditions.go:123] node cpu capacity is 6
	I0203 15:14:02.509416   21936 node_conditions.go:105] duration metric: took 3.380092ms to run NodePressure ...
	I0203 15:14:02.509427   21936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:14:02.689136   21936 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0203 15:14:02.693770   21936 kubeadm.go:784] kubelet initialised
	I0203 15:14:02.693782   21936 kubeadm.go:785] duration metric: took 4.632137ms waiting for restarted kubelet to initialise ...
	I0203 15:14:02.693791   21936 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
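The per-pod waits that follow are roughly what these kubectl invocations would do (a sketch; minikube performs the equivalent checks against the API in Go rather than shelling out, and the context name here assumes the kubeconfig context created for this profile):

	kubectl --context embed-certs-913000 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	kubectl --context embed-certs-913000 -n kube-system wait --for=condition=Ready pod/kube-scheduler-embed-certs-913000 --timeout=4m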
	I0203 15:14:02.700586   21936 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-m988z" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:02.706058   21936 pod_ready.go:92] pod "coredns-787d4945fb-m988z" in "kube-system" namespace has status "Ready":"True"
	I0203 15:14:02.706070   21936 pod_ready.go:81] duration metric: took 5.472249ms waiting for pod "coredns-787d4945fb-m988z" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:02.706078   21936 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-913000" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:02.710773   21936 pod_ready.go:92] pod "etcd-embed-certs-913000" in "kube-system" namespace has status "Ready":"True"
	I0203 15:14:02.710784   21936 pod_ready.go:81] duration metric: took 4.700517ms waiting for pod "etcd-embed-certs-913000" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:02.710792   21936 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-913000" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:02.715764   21936 pod_ready.go:92] pod "kube-apiserver-embed-certs-913000" in "kube-system" namespace has status "Ready":"True"
	I0203 15:14:02.715772   21936 pod_ready.go:81] duration metric: took 4.975993ms waiting for pod "kube-apiserver-embed-certs-913000" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:02.715779   21936 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-913000" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:02.902796   21936 pod_ready.go:92] pod "kube-controller-manager-embed-certs-913000" in "kube-system" namespace has status "Ready":"True"
	I0203 15:14:02.902809   21936 pod_ready.go:81] duration metric: took 187.020663ms waiting for pod "kube-controller-manager-embed-certs-913000" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:02.902816   21936 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-97s59" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:03.301768   21936 pod_ready.go:92] pod "kube-proxy-97s59" in "kube-system" namespace has status "Ready":"True"
	I0203 15:14:03.301779   21936 pod_ready.go:81] duration metric: took 398.950288ms waiting for pod "kube-proxy-97s59" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:03.301787   21936 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-913000" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:05.709627   21936 pod_ready.go:102] pod "kube-scheduler-embed-certs-913000" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:07.710224   21936 pod_ready.go:102] pod "kube-scheduler-embed-certs-913000" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:10.208469   21936 pod_ready.go:92] pod "kube-scheduler-embed-certs-913000" in "kube-system" namespace has status "Ready":"True"
	I0203 15:14:10.208484   21936 pod_ready.go:81] duration metric: took 6.906536563s waiting for pod "kube-scheduler-embed-certs-913000" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:10.208490   21936 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace to be "Ready" ...
	I0203 15:14:12.221259   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:14.719549   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:16.720696   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:18.721040   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:20.721109   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:18.732332   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:14:18.732504   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:14:23.220489   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:25.221441   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:27.722202   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:30.220188   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:32.221289   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:34.721397   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:36.721514   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:39.220942   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:41.721354   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:44.222409   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:46.720128   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:48.722271   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:50.723792   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:53.221378   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:55.720323   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:57.721861   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:59.722423   21936 pod_ready.go:102] pod "metrics-server-7997d45854-9rm6r" in "kube-system" namespace has status "Ready":"False"
	I0203 15:14:58.734150   21126 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0203 15:14:58.734308   21126 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0203 15:14:58.734321   21126 kubeadm.go:322] 
	I0203 15:14:58.734347   21126 kubeadm.go:322] Unfortunately, an error has occurred:
	I0203 15:14:58.734373   21126 kubeadm.go:322] 	timed out waiting for the condition
	I0203 15:14:58.734377   21126 kubeadm.go:322] 
	I0203 15:14:58.734399   21126 kubeadm.go:322] This error is likely caused by:
	I0203 15:14:58.734423   21126 kubeadm.go:322] 	- The kubelet is not running
	I0203 15:14:58.734491   21126 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0203 15:14:58.734496   21126 kubeadm.go:322] 
	I0203 15:14:58.734576   21126 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0203 15:14:58.734609   21126 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0203 15:14:58.734634   21126 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0203 15:14:58.734638   21126 kubeadm.go:322] 
	I0203 15:14:58.734728   21126 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0203 15:14:58.734804   21126 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0203 15:14:58.734872   21126 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0203 15:14:58.734918   21126 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0203 15:14:58.734971   21126 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0203 15:14:58.734995   21126 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0203 15:14:58.738313   21126 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0203 15:14:58.738411   21126 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0203 15:14:58.738538   21126 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0203 15:14:58.738620   21126 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 15:14:58.738738   21126 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0203 15:14:58.738798   21126 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0203 15:14:58.738850   21126 kubeadm.go:403] StartCluster complete in 8m5.206203453s
	I0203 15:14:58.738943   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:14:58.762138   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.762150   21126 logs.go:281] No container was found matching "kube-apiserver"
	I0203 15:14:58.762220   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:14:58.786353   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.786367   21126 logs.go:281] No container was found matching "etcd"
	I0203 15:14:58.786448   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:14:58.811734   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.811747   21126 logs.go:281] No container was found matching "coredns"
	I0203 15:14:58.811820   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:14:58.834722   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.834736   21126 logs.go:281] No container was found matching "kube-scheduler"
	I0203 15:14:58.834805   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:14:58.858239   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.858253   21126 logs.go:281] No container was found matching "kube-proxy"
	I0203 15:14:58.858323   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:14:58.882446   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.882458   21126 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0203 15:14:58.882525   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:14:58.906762   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.906776   21126 logs.go:281] No container was found matching "storage-provisioner"
	I0203 15:14:58.906842   21126 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:14:58.931071   21126 logs.go:279] 0 containers: []
	W0203 15:14:58.931085   21126 logs.go:281] No container was found matching "kube-controller-manager"
	I0203 15:14:58.931093   21126 logs.go:124] Gathering logs for kubelet ...
	I0203 15:14:58.931100   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:14:58.968187   21126 logs.go:124] Gathering logs for dmesg ...
	I0203 15:14:58.968203   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:14:58.980491   21126 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:14:58.980504   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0203 15:14:59.034723   21126 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0203 15:14:59.034737   21126 logs.go:124] Gathering logs for Docker ...
	I0203 15:14:59.034744   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:14:59.050311   21126 logs.go:124] Gathering logs for container status ...
	I0203 15:14:59.050325   21126 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:15:01.100139   21126 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049755261s)
	W0203 15:15:01.100252   21126 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0203 15:15:01.100270   21126 out.go:239] * 
	W0203 15:15:01.100389   21126 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 15:15:01.100415   21126 out.go:239] * 
	W0203 15:15:01.101097   21126 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0203 15:15:01.185870   21126 out.go:177] 
	W0203 15:15:01.228585   21126 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0203 15:15:01.228653   21126 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0203 15:15:01.228686   21126 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0203 15:15:01.249636   21126 out.go:177] 
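The suggestion above translates to re-running the start with the kubelet cgroup driver pinned to systemd, along these lines (a sketch based on the profile, Kubernetes version and driver visible in this test; any other flags the test passed are not shown here):

	out/minikube-darwin-amd64 start -p old-k8s-version-136000 \
	  --kubernetes-version=v1.16.0 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd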
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-03 23:06:49 UTC, end at Fri 2023-02-03 23:15:02 UTC. --
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Started Docker Application Container Engine.
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[437]: time="2023-02-03T23:06:52.539155107Z" level=info msg="Processing signal 'terminated'"
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[437]: time="2023-02-03T23:06:52.540022579Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[437]: time="2023-02-03T23:06:52.540232660Z" level=info msg="Daemon shutdown complete"
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: docker.service: Succeeded.
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Starting Docker Application Container Engine...
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.587557248Z" level=info msg="Starting up"
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.589324775Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.589361076Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.589385186Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.589394737Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.590574981Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.590616786Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.590634858Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.590645110Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.597659541Z" level=info msg="Loading containers: start."
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.674141602Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.707135159Z" level=info msg="Loading containers: done."
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.715870675Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.715965108Z" level=info msg="Daemon has completed initialization"
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Started Docker Application Container Engine.
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.736641748Z" level=info msg="API listen on [::]:2376"
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.743050535Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-02-03T23:15:04Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:15:05 up  1:14,  0 users,  load average: 0.57, 1.09, 1.30
	Linux old-k8s-version-136000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-03 23:06:49 UTC, end at Fri 2023-02-03 23:15:05 UTC. --
	Feb 03 23:15:03 old-k8s-version-136000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 03 23:15:03 old-k8s-version-136000 kubelet[14736]: I0203 23:15:03.940884   14736 server.go:410] Version: v1.16.0
	Feb 03 23:15:03 old-k8s-version-136000 kubelet[14736]: I0203 23:15:03.941173   14736 plugins.go:100] No cloud provider specified.
	Feb 03 23:15:03 old-k8s-version-136000 kubelet[14736]: I0203 23:15:03.941208   14736 server.go:773] Client rotation is on, will bootstrap in background
	Feb 03 23:15:03 old-k8s-version-136000 kubelet[14736]: I0203 23:15:03.943003   14736 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 03 23:15:03 old-k8s-version-136000 kubelet[14736]: W0203 23:15:03.945332   14736 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 03 23:15:03 old-k8s-version-136000 kubelet[14736]: W0203 23:15:03.945560   14736 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 03 23:15:03 old-k8s-version-136000 kubelet[14736]: F0203 23:15:03.945617   14736 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 03 23:15:03 old-k8s-version-136000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 03 23:15:03 old-k8s-version-136000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 03 23:15:04 old-k8s-version-136000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Feb 03 23:15:04 old-k8s-version-136000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 03 23:15:04 old-k8s-version-136000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 03 23:15:04 old-k8s-version-136000 kubelet[14752]: I0203 23:15:04.686775   14752 server.go:410] Version: v1.16.0
	Feb 03 23:15:04 old-k8s-version-136000 kubelet[14752]: I0203 23:15:04.686993   14752 plugins.go:100] No cloud provider specified.
	Feb 03 23:15:04 old-k8s-version-136000 kubelet[14752]: I0203 23:15:04.687002   14752 server.go:773] Client rotation is on, will bootstrap in background
	Feb 03 23:15:04 old-k8s-version-136000 kubelet[14752]: I0203 23:15:04.688761   14752 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 03 23:15:04 old-k8s-version-136000 kubelet[14752]: W0203 23:15:04.689520   14752 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 03 23:15:04 old-k8s-version-136000 kubelet[14752]: W0203 23:15:04.689603   14752 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 03 23:15:04 old-k8s-version-136000 kubelet[14752]: F0203 23:15:04.689626   14752 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 03 23:15:04 old-k8s-version-136000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 03 23:15:04 old-k8s-version-136000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 03 23:15:05 old-k8s-version-136000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 163.
	Feb 03 23:15:05 old-k8s-version-136000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 03 23:15:05 old-k8s-version-136000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 15:15:05.097962   22104 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
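The kubelet log in the capture above shows why the control plane never came up: kubelet v1.16.0 exits with "failed to run Kubelet: mountpoint for cpu not found", which is what a kubelet of that vintage reports when the node's cgroup layout is not the cgroup v1 hierarchy it expects. Two host-side commands that could confirm what the node container actually sees (a sketch; the container name matches this test's node, and the docker info template fields assume Docker 20.10+):

	docker exec old-k8s-version-136000 ls /sys/fs/cgroup
	docker info --format 'cgroup driver: {{.CgroupDriver}}, cgroup version: {{.CgroupVersion}}'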
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-136000 -n old-k8s-version-136000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 2 (422.719394ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-136000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (497.66s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0203 15:15:13.722502    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:15:14.286228    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:15:14.311232    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:15:37.655960    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:15:53.079131    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:16:10.731524    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:16:35.704557    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 15:16:37.330336    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:16:51.841130    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:17:00.010229    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:17:18.138926    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:17:29.880802    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:17:57.566996    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:18:14.885266    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:18:18.564057    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:18:30.639995    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:18:41.191713    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:19:01.939244    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:19:38.758085    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:19:41.620200    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:20:14.292346    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:20:14.319515    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:20:25.004233    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:20:37.662650    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:20:53.084105    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:21:10.739429    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:21:35.710025    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 15:21:37.369998    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:21:51.847048    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:22:00.017170    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 15:22:00.711471    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:22:16.140651    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 15:22:18.145629    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:22:29.887966    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:23:30.644821    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:24:01.944587    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-136000 -n old-k8s-version-136000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 2 (404.816571ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-136000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-136000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-136000:

-- stdout --
	[
	    {
	        "Id": "845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423",
	        "Created": "2023-02-03T23:01:11.889189264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T23:06:49.643869127Z",
	            "FinishedAt": "2023-02-03T23:06:46.709273842Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hostname",
	        "HostsPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hosts",
	        "LogPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423-json.log",
	        "Name": "/old-k8s-version-136000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-136000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-136000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5-init/diff:/var/lib/docker/overlay2/48b9eff26e94f4439154aad348135bd66f3f3733ee1f2bd22fc60e3a240f764f/diff:/var/lib/docker/overlay2/89930e70b646c5893dab0f6f4274a9fb3b60a11d62da2f59d4b55fbf1c480a90/diff:/var/lib/docker/overlay2/3ae0575a256264d050211e3ca122b2804683b9f4323f7a2c2a2d45f4df3254dd/diff:/var/lib/docker/overlay2/6468a293a6ba199c732872fb7807de809fa2ff9ecdccaeb7146f28e1a4dc9607/diff:/var/lib/docker/overlay2/3fab248b5834a764e1996b2fea0af0100ffc2c150728124745a8e42d43a2193d/diff:/var/lib/docker/overlay2/1ec21b4015d44918fda148d959030dadcaa3527172fde96571978bdabab6921e/diff:/var/lib/docker/overlay2/5465a266a0268ad0ffa1c12afbc320e2232b025ee4eaa5c74b2f5b236ce5285d/diff:/var/lib/docker/overlay2/61b7474b98e6431b966662b98c31f46eb982bdd7098bfccdad928e6c3c0a9024/diff:/var/lib/docker/overlay2/d0925bff8df24b32d176f1438969c0c3adac5ec1bc1da61c2a8bf17e4fd9313b/diff:/var/lib/docker/overlay2/b6c213
617f12dea208efc9c642db1147a22658b32383a0256106a994fcafebca/diff:/var/lib/docker/overlay2/5127e35d4cf68de9ece51806ff390f9b88bac61eaa8bfdf4cf5d6ab1e5b2ca27/diff:/var/lib/docker/overlay2/3d041d254d21e7ec2e2abdce56a3e6eadb3f668238bf3667e7c25effdcc05940/diff:/var/lib/docker/overlay2/15bab989d641601a640d89b58f645e79668cb801bf10066ecd9790e4c8bbd4f1/diff:/var/lib/docker/overlay2/d6e45696a59c84a5b4ad5ad0bec8b561335a71b3c4eaaa35bcbcc00bd3fbcc1a/diff:/var/lib/docker/overlay2/d0a13d3859926a84eb9c7b571fa8c670d15ebf0ab75e6e8971a7b8679b316ca1/diff:/var/lib/docker/overlay2/a5096e1509a8455c4d67f60b17102a08c795ad1bdbeeac3dd75c3b05ec6d922c/diff:/var/lib/docker/overlay2/aeeda7f653d5dcfbb5ef8a7b53a6aba12a5892c04d984f10a71be11833addb2d/diff:/var/lib/docker/overlay2/84bf768303dfde933d5690feb659b1acd5419ca63d78c4760218d578794c3bbe/diff:/var/lib/docker/overlay2/dec6762f77828143e0cb548cc3a6bb9cc10b9f4376070bc49558da8dfd0b7d2e/diff:/var/lib/docker/overlay2/cc9805f6c705d4d0c6c7675e7745ab0dcdd90879809a2089256c0606e80cee7a/diff:/var/lib/d
ocker/overlay2/e34b4063934c19fe1e614a10ef1e9582f55283fa37c9d0b89d0df8ca32a8a03a/diff:/var/lib/docker/overlay2/c6b6cf801ae9739234022d5e5c55176ee1249b3441400f8b9dbde2c15c6d66e3/diff:/var/lib/docker/overlay2/73dfe58a9f4125f321d10ef97d5c2d4951480455bb243f166600ead63c22f5c2/diff:/var/lib/docker/overlay2/476ba412f9e61cc020124b5051db9c99ea08176881e535e0b5fe6ddb51b94a72/diff:/var/lib/docker/overlay2/2729a4e84f2d55dc49c9417254fc26c0baa21f93cd9b58386f869cf5add162c1/diff:/var/lib/docker/overlay2/8523001ce06172b58b31ebf311f62bf435ed3a3d48fec58d3f1239f29386a28b/diff:/var/lib/docker/overlay2/2b7edb3177897200229f3ba188cfd00e16df91cf85b91a5f08ddbfa15d898a3d/diff:/var/lib/docker/overlay2/94231ff2ac5bf304d3c25d204f1a7b2195ef2230bfbb7bb5a1a1d6f2f4faad6a/diff:/var/lib/docker/overlay2/698d3cd800bae40e0aeb942360c67b793550c24bab66ba43080cbcaa500a9069/diff:/var/lib/docker/overlay2/6aadd46423b70866f00e0f4f83310711c1bc22b4dc8989e6b58cd6254540c428/diff:/var/lib/docker/overlay2/035afbe91bfd3bebd444b29f3ceed1e954aab275fca0c8aaf2364df71f4
6e0c3/diff:/var/lib/docker/overlay2/bc68049ba1568fe8bb188720c62bcc993e62a364901ba41a533aa2991cceaf82/diff:/var/lib/docker/overlay2/c3373595ff40ba0ece2698f99fc2e1c9a83c0ef6a1df119125e3009256dee2ed/diff:/var/lib/docker/overlay2/59c87dca7d8987a7e1b5cd959772e06b96d6ecb36399ff9e35a1ecfe4ed33345/diff:/var/lib/docker/overlay2/22434c33a4994657a469b040789f269ac912f4046d76f2531dff05de4700fb3b/diff:/var/lib/docker/overlay2/699ea76dd0a43fedc031501535714f087d7ec3f37593390c9e81c029373c7f8f/diff:/var/lib/docker/overlay2/e9414c264977801651ed9f3ee268cd0f245614747e184e8f3170e1e95d1fc081/diff:/var/lib/docker/overlay2/2781a0c689754699793aa9bdfeeabdaa1c6905e265302dd267c6c12daa01eb9c/diff:/var/lib/docker/overlay2/4b59a1fc73d3e865eaf7e2e62fd6d2808234c79d79b6b30f6b1a482a291580d3/diff:/var/lib/docker/overlay2/7f51e83dcff3227064daa2b7cc6a7c87f8f5e415fa8723316c24512d6029941d/diff:/var/lib/docker/overlay2/50662c60babc4d383f2af76fc66f3712bcc9e85a50f0525fa680c8336af46ce3/diff:/var/lib/docker/overlay2/2112d8437fae31ae95f85bdf08e3f29d09d7b8
adf34c9608a2e3bfecc049e0c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-136000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-136000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-136000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b43f019aed40f7f6d26e5fc19850e1e26591afe1aebb383bfc62a7e02b87e1da",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55352"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55353"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55354"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55356"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b43f019aed40",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-136000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "845795d4cf37",
	                        "old-k8s-version-136000"
	                    ],
	                    "NetworkID": "a4c82c2a3592223db620bf95332091613324019646bbe58152af123c5085aba4",
	                    "EndpointID": "9d19243bdc4b0034b95a676b71e1e9f6a1d25ba7078faa4d4b80def87e2b6889",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 2 (417.044681ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-136000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-136000 logs -n 25: (3.456436177s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-136000   | old-k8s-version-136000       | jenkins | v1.29.0 | 03 Feb 23 15:05 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-136000                         | old-k8s-version-136000       | jenkins | v1.29.0 | 03 Feb 23 15:06 PST | 03 Feb 23 15:06 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-136000        | old-k8s-version-136000       | jenkins | v1.29.0 | 03 Feb 23 15:06 PST | 03 Feb 23 15:06 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-136000                         | old-k8s-version-136000       | jenkins | v1.29.0 | 03 Feb 23 15:06 PST |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-520000 sudo                         | no-preload-520000            | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-520000                              | no-preload-520000            | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-520000                              | no-preload-520000            | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-520000                              | no-preload-520000            | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	| delete  | -p no-preload-520000                              | no-preload-520000            | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	| start   | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:13 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-913000       | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:13 PST | 03 Feb 23 15:13 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:13 PST | 03 Feb 23 15:13 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-913000            | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:13 PST | 03 Feb 23 15:13 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:13 PST | 03 Feb 23 15:22 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-913000 sudo                        | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:23 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:23 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:23 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:23 PST |
	| delete  | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:23 PST |
	| delete  | -p                                                | disable-driver-mounts-350000 | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:23 PST |
	|         | disable-driver-mounts-350000                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-893000 | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:24 PST |
	|         | default-k8s-diff-port-893000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-diff-port-893000 | jenkins | v1.29.0 | 03 Feb 23 15:24 PST | 03 Feb 23 15:24 PST |
	|         | default-k8s-diff-port-893000                      |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-diff-port-893000 | jenkins | v1.29.0 | 03 Feb 23 15:24 PST | 03 Feb 23 15:24 PST |
	|         | default-k8s-diff-port-893000                      |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-893000  | default-k8s-diff-port-893000 | jenkins | v1.29.0 | 03 Feb 23 15:24 PST | 03 Feb 23 15:24 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-893000 | jenkins | v1.29.0 | 03 Feb 23 15:24 PST |                     |
	|         | default-k8s-diff-port-893000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/03 15:24:27
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 15:24:27.693221   23025 out.go:296] Setting OutFile to fd 1 ...
	I0203 15:24:27.693404   23025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 15:24:27.693409   23025 out.go:309] Setting ErrFile to fd 2...
	I0203 15:24:27.693413   23025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 15:24:27.693508   23025 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 15:24:27.693987   23025 out.go:303] Setting JSON to false
	I0203 15:24:27.713270   23025 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5042,"bootTime":1675461625,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0203 15:24:27.713368   23025 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0203 15:24:27.735559   23025 out.go:177] * [default-k8s-diff-port-893000] minikube v1.29.0 on Darwin 13.2
	I0203 15:24:27.778486   23025 notify.go:220] Checking for updates...
	I0203 15:24:27.800288   23025 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 15:24:27.842263   23025 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 15:24:27.864420   23025 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0203 15:24:27.886012   23025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 15:24:27.907260   23025 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	I0203 15:24:27.928229   23025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 15:24:27.949709   23025 config.go:180] Loaded profile config "default-k8s-diff-port-893000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 15:24:27.950253   23025 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 15:24:28.010607   23025 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0203 15:24:28.010761   23025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 15:24:28.152261   23025 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 23:24:28.060264062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 15:24:28.195923   23025 out.go:177] * Using the docker driver based on existing profile
	I0203 15:24:28.217822   23025 start.go:296] selected driver: docker
	I0203 15:24:28.217850   23025 start.go:857] validating driver "docker" against &{Name:default-k8s-diff-port-893000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-893000 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:24:28.218020   23025 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 15:24:28.221864   23025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 15:24:28.364904   23025 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 23:24:28.273449601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64
IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17]
map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 15:24:28.365054   23025 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 15:24:28.365073   23025 cni.go:84] Creating CNI manager for ""
	I0203 15:24:28.365086   23025 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 15:24:28.365094   23025 start_flags.go:319] config:
	{Name:default-k8s-diff-port-893000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-893000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:24:28.387171   23025 out.go:177] * Starting control plane node default-k8s-diff-port-893000 in cluster default-k8s-diff-port-893000
	I0203 15:24:28.408844   23025 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 15:24:28.430693   23025 out.go:177] * Pulling base image ...
	I0203 15:24:28.472815   23025 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 15:24:28.472827   23025 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 15:24:28.472913   23025 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0203 15:24:28.472933   23025 cache.go:57] Caching tarball of preloaded images
	I0203 15:24:28.473150   23025 preload.go:174] Found /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 15:24:28.473172   23025 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0203 15:24:28.474187   23025 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/config.json ...
	I0203 15:24:28.532335   23025 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon, skipping pull
	I0203 15:24:28.532349   23025 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in daemon, skipping load
	I0203 15:24:28.532369   23025 cache.go:193] Successfully downloaded all kic artifacts
	I0203 15:24:28.532407   23025 start.go:364] acquiring machines lock for default-k8s-diff-port-893000: {Name:mk878f02f565e8fdfaecc254209cf866c1a40f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 15:24:28.532500   23025 start.go:368] acquired machines lock for "default-k8s-diff-port-893000" in 66.801µs
	I0203 15:24:28.532528   23025 start.go:96] Skipping create...Using existing machine configuration
	I0203 15:24:28.532540   23025 fix.go:55] fixHost starting: 
	I0203 15:24:28.532776   23025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-893000 --format={{.State.Status}}
	I0203 15:24:28.589680   23025 fix.go:103] recreateIfNeeded on default-k8s-diff-port-893000: state=Stopped err=<nil>
	W0203 15:24:28.589708   23025 fix.go:129] unexpected machine state, will restart: <nil>
	I0203 15:24:28.633186   23025 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-893000" ...
	I0203 15:24:28.654276   23025 cli_runner.go:164] Run: docker start default-k8s-diff-port-893000
	I0203 15:24:28.986255   23025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-893000 --format={{.State.Status}}
	I0203 15:24:29.045886   23025 kic.go:426] container "default-k8s-diff-port-893000" state is running.
	I0203 15:24:29.046498   23025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-893000
	I0203 15:24:29.107171   23025 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/config.json ...
	I0203 15:24:29.107587   23025 machine.go:88] provisioning docker machine ...
	I0203 15:24:29.107618   23025 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-893000"
	I0203 15:24:29.107684   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:29.180431   23025 main.go:141] libmachine: Using SSH client type: native
	I0203 15:24:29.180651   23025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56288 <nil> <nil>}
	I0203 15:24:29.180665   23025 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-893000 && echo "default-k8s-diff-port-893000" | sudo tee /etc/hostname
	I0203 15:24:29.338253   23025 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-893000
	
	I0203 15:24:29.338368   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:29.400238   23025 main.go:141] libmachine: Using SSH client type: native
	I0203 15:24:29.400402   23025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56288 <nil> <nil>}
	I0203 15:24:29.400419   23025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-893000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-893000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-893000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 15:24:29.535058   23025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 15:24:29.535084   23025 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15770-1719/.minikube CaCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15770-1719/.minikube}
	I0203 15:24:29.535100   23025 ubuntu.go:177] setting up certificates
	I0203 15:24:29.535109   23025 provision.go:83] configureAuth start
	I0203 15:24:29.535190   23025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-893000
	I0203 15:24:29.592588   23025 provision.go:138] copyHostCerts
	I0203 15:24:29.592685   23025 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem, removing ...
	I0203 15:24:29.592696   23025 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem
	I0203 15:24:29.592798   23025 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem (1078 bytes)
	I0203 15:24:29.593016   23025 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem, removing ...
	I0203 15:24:29.593023   23025 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem
	I0203 15:24:29.593090   23025 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem (1123 bytes)
	I0203 15:24:29.593240   23025 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem, removing ...
	I0203 15:24:29.593248   23025 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem
	I0203 15:24:29.593311   23025 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem (1675 bytes)
	I0203 15:24:29.593434   23025 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-893000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-893000]
	I0203 15:24:29.649673   23025 provision.go:172] copyRemoteCerts
	I0203 15:24:29.649736   23025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 15:24:29.649789   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:29.708342   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:24:29.801386   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 15:24:29.818642   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0203 15:24:29.835979   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 15:24:29.852903   23025 provision.go:86] duration metric: configureAuth took 317.7715ms
	I0203 15:24:29.852920   23025 ubuntu.go:193] setting minikube options for container-runtime
	I0203 15:24:29.853089   23025 config.go:180] Loaded profile config "default-k8s-diff-port-893000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 15:24:29.853156   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:29.910182   23025 main.go:141] libmachine: Using SSH client type: native
	I0203 15:24:29.910340   23025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56288 <nil> <nil>}
	I0203 15:24:29.910349   23025 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 15:24:30.040135   23025 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0203 15:24:30.040147   23025 ubuntu.go:71] root file system type: overlay
	I0203 15:24:30.040290   23025 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 15:24:30.040370   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:30.097456   23025 main.go:141] libmachine: Using SSH client type: native
	I0203 15:24:30.097608   23025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56288 <nil> <nil>}
	I0203 15:24:30.097665   23025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 15:24:30.232368   23025 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 15:24:30.232463   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:30.289010   23025 main.go:141] libmachine: Using SSH client type: native
	I0203 15:24:30.289172   23025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56288 <nil> <nil>}
	I0203 15:24:30.289186   23025 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 15:24:30.418349   23025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0203 15:24:30.418364   23025 machine.go:91] provisioned docker machine in 1.310739819s
	I0203 15:24:30.418371   23025 start.go:300] post-start starting for "default-k8s-diff-port-893000" (driver="docker")
	I0203 15:24:30.418377   23025 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 15:24:30.418443   23025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 15:24:30.418497   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:30.475837   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:24:30.569446   23025 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 15:24:30.573137   23025 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0203 15:24:30.573153   23025 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0203 15:24:30.573160   23025 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0203 15:24:30.573165   23025 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0203 15:24:30.573172   23025 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/addons for local assets ...
	I0203 15:24:30.573265   23025 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/files for local assets ...
	I0203 15:24:30.573420   23025 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> 25682.pem in /etc/ssl/certs
	I0203 15:24:30.573591   23025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 15:24:30.581005   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /etc/ssl/certs/25682.pem (1708 bytes)
	I0203 15:24:30.598147   23025 start.go:303] post-start completed in 179.76249ms
	I0203 15:24:30.598225   23025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 15:24:30.598278   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:30.654761   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:24:30.742829   23025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0203 15:24:30.747762   23025 fix.go:57] fixHost completed within 2.215169081s
	I0203 15:24:30.747781   23025 start.go:83] releasing machines lock for "default-k8s-diff-port-893000", held for 2.215224168s
	I0203 15:24:30.747864   23025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-893000
	I0203 15:24:30.806626   23025 ssh_runner.go:195] Run: cat /version.json
	I0203 15:24:30.806629   23025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 15:24:30.806695   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:30.806720   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:30.867933   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:24:30.868192   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:24:30.956694   23025 ssh_runner.go:195] Run: systemctl --version
	I0203 15:24:31.013369   23025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 15:24:31.019225   23025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0203 15:24:31.035150   23025 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0203 15:24:31.035255   23025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 15:24:31.043047   23025 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0203 15:24:31.055820   23025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 15:24:31.063539   23025 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0203 15:24:31.063557   23025 start.go:483] detecting cgroup driver to use...
	I0203 15:24:31.063568   23025 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 15:24:31.063650   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 15:24:31.076599   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0203 15:24:31.085219   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 15:24:31.093605   23025 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 15:24:31.093669   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 15:24:31.102210   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 15:24:31.110761   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 15:24:31.119347   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 15:24:31.127776   23025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 15:24:31.135488   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 15:24:31.143935   23025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 15:24:31.151381   23025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 15:24:31.158558   23025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:24:31.224059   23025 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 15:24:31.298280   23025 start.go:483] detecting cgroup driver to use...
	I0203 15:24:31.298303   23025 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 15:24:31.298383   23025 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 15:24:31.308937   23025 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0203 15:24:31.309004   23025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 15:24:31.319042   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 15:24:31.333401   23025 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 15:24:31.434618   23025 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 15:24:31.531084   23025 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 15:24:31.531102   23025 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0203 15:24:31.544043   23025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:24:31.636041   23025 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 15:24:31.923477   23025 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 15:24:31.995079   23025 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 15:24:32.075303   23025 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 15:24:32.142629   23025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:24:32.215735   23025 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 15:24:32.227510   23025 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 15:24:32.227599   23025 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 15:24:32.231817   23025 start.go:551] Will wait 60s for crictl version
	I0203 15:24:32.231866   23025 ssh_runner.go:195] Run: which crictl
	I0203 15:24:32.235408   23025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 15:24:32.346409   23025 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0203 15:24:32.346489   23025 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 15:24:32.375797   23025 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 15:24:32.454282   23025 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0203 15:24:32.454426   23025 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-893000 dig +short host.docker.internal
	I0203 15:24:32.607333   23025 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0203 15:24:32.607444   23025 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0203 15:24:32.611933   23025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 15:24:32.621797   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:32.679249   23025 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 15:24:32.679318   23025 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-03 23:06:49 UTC, end at Fri 2023-02-03 23:24:37 UTC. --
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Started Docker Application Container Engine.
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[437]: time="2023-02-03T23:06:52.539155107Z" level=info msg="Processing signal 'terminated'"
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[437]: time="2023-02-03T23:06:52.540022579Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[437]: time="2023-02-03T23:06:52.540232660Z" level=info msg="Daemon shutdown complete"
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: docker.service: Succeeded.
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Starting Docker Application Container Engine...
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.587557248Z" level=info msg="Starting up"
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.589324775Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.589361076Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.589385186Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.589394737Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.590574981Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.590616786Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.590634858Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.590645110Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.597659541Z" level=info msg="Loading containers: start."
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.674141602Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.707135159Z" level=info msg="Loading containers: done."
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.715870675Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.715965108Z" level=info msg="Daemon has completed initialization"
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Started Docker Application Container Engine.
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.736641748Z" level=info msg="API listen on [::]:2376"
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.743050535Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-02-03T23:24:39Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:24:40 up  1:23,  0 users,  load average: 0.30, 0.49, 0.88
	Linux old-k8s-version-136000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-03 23:06:49 UTC, end at Fri 2023-02-03 23:24:40 UTC. --
	Feb 03 23:24:38 old-k8s-version-136000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 03 23:24:39 old-k8s-version-136000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Feb 03 23:24:39 old-k8s-version-136000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 03 23:24:39 old-k8s-version-136000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24882]: I0203 23:24:39.193206   24882 server.go:410] Version: v1.16.0
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24882]: I0203 23:24:39.193439   24882 plugins.go:100] No cloud provider specified.
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24882]: I0203 23:24:39.193450   24882 server.go:773] Client rotation is on, will bootstrap in background
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24882]: I0203 23:24:39.195209   24882 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24882]: W0203 23:24:39.195929   24882 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24882]: W0203 23:24:39.196000   24882 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24882]: F0203 23:24:39.196025   24882 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 03 23:24:39 old-k8s-version-136000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 03 23:24:39 old-k8s-version-136000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 03 23:24:39 old-k8s-version-136000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Feb 03 23:24:39 old-k8s-version-136000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 03 23:24:39 old-k8s-version-136000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24905]: I0203 23:24:39.949378   24905 server.go:410] Version: v1.16.0
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24905]: I0203 23:24:39.949635   24905 plugins.go:100] No cloud provider specified.
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24905]: I0203 23:24:39.949646   24905 server.go:773] Client rotation is on, will bootstrap in background
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24905]: I0203 23:24:39.951536   24905 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24905]: W0203 23:24:39.952248   24905 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24905]: W0203 23:24:39.952315   24905 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 03 23:24:39 old-k8s-version-136000 kubelet[24905]: F0203 23:24:39.952341   24905 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 03 23:24:39 old-k8s-version-136000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 03 23:24:39 old-k8s-version-136000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 15:24:39.988647   23125 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-136000 -n old-k8s-version-136000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 2 (402.234473ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-136000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (555s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:25:14.296333    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:25:14.320340    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:25:37.664500    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:25:53.085999    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:26:10.740027    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:26:35.713041    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:26:51.848677    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:27:00.018360    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:27:18.147169    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:27:29.889934    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:27:33.806158    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:28:18.572340    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:28:30.646978    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:28:52.936817    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:29:01.945620    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:30:03.109989    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:30:14.301841    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:30:14.326777    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:30:37.670365    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/false-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0203 15:30:53.092327    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55356/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0203 15:31:10.746196    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0203 15:31:33.753764    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0203 15:31:35.718398    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0203 15:31:51.854848    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0203 15:32:00.025013    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0203 15:32:18.153117    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0203 15:32:29.895037    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0203 15:33:17.348908    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0203 15:33:18.578117    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0203 15:33:30.652030    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-136000 -n old-k8s-version-136000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 2 (404.610078ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-136000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-136000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-136000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.353µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-136000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-136000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-136000:

-- stdout --
	[
	    {
	        "Id": "845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423",
	        "Created": "2023-02-03T23:01:11.889189264Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-03T23:06:49.643869127Z",
	            "FinishedAt": "2023-02-03T23:06:46.709273842Z"
	        },
	        "Image": "sha256:5f59734230331367fdba579a7224885a8ca1b2b3a1b0a3db04074b5e8b329b90",
	        "ResolvConfPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hostname",
	        "HostsPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/hosts",
	        "LogPath": "/var/lib/docker/containers/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423/845795d4cf37caeef2ebc39507d52b464cb71df8ed223e86fa4ff055f8487423-json.log",
	        "Name": "/old-k8s-version-136000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-136000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-136000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5-init/diff:/var/lib/docker/overlay2/48b9eff26e94f4439154aad348135bd66f3f3733ee1f2bd22fc60e3a240f764f/diff:/var/lib/docker/overlay2/89930e70b646c5893dab0f6f4274a9fb3b60a11d62da2f59d4b55fbf1c480a90/diff:/var/lib/docker/overlay2/3ae0575a256264d050211e3ca122b2804683b9f4323f7a2c2a2d45f4df3254dd/diff:/var/lib/docker/overlay2/6468a293a6ba199c732872fb7807de809fa2ff9ecdccaeb7146f28e1a4dc9607/diff:/var/lib/docker/overlay2/3fab248b5834a764e1996b2fea0af0100ffc2c150728124745a8e42d43a2193d/diff:/var/lib/docker/overlay2/1ec21b4015d44918fda148d959030dadcaa3527172fde96571978bdabab6921e/diff:/var/lib/docker/overlay2/5465a266a0268ad0ffa1c12afbc320e2232b025ee4eaa5c74b2f5b236ce5285d/diff:/var/lib/docker/overlay2/61b7474b98e6431b966662b98c31f46eb982bdd7098bfccdad928e6c3c0a9024/diff:/var/lib/docker/overlay2/d0925bff8df24b32d176f1438969c0c3adac5ec1bc1da61c2a8bf17e4fd9313b/diff:/var/lib/docker/overlay2/b6c213617f12dea208efc9c642db1147a22658b32383a0256106a994fcafebca/diff:/var/lib/docker/overlay2/5127e35d4cf68de9ece51806ff390f9b88bac61eaa8bfdf4cf5d6ab1e5b2ca27/diff:/var/lib/docker/overlay2/3d041d254d21e7ec2e2abdce56a3e6eadb3f668238bf3667e7c25effdcc05940/diff:/var/lib/docker/overlay2/15bab989d641601a640d89b58f645e79668cb801bf10066ecd9790e4c8bbd4f1/diff:/var/lib/docker/overlay2/d6e45696a59c84a5b4ad5ad0bec8b561335a71b3c4eaaa35bcbcc00bd3fbcc1a/diff:/var/lib/docker/overlay2/d0a13d3859926a84eb9c7b571fa8c670d15ebf0ab75e6e8971a7b8679b316ca1/diff:/var/lib/docker/overlay2/a5096e1509a8455c4d67f60b17102a08c795ad1bdbeeac3dd75c3b05ec6d922c/diff:/var/lib/docker/overlay2/aeeda7f653d5dcfbb5ef8a7b53a6aba12a5892c04d984f10a71be11833addb2d/diff:/var/lib/docker/overlay2/84bf768303dfde933d5690feb659b1acd5419ca63d78c4760218d578794c3bbe/diff:/var/lib/docker/overlay2/dec6762f77828143e0cb548cc3a6bb9cc10b9f4376070bc49558da8dfd0b7d2e/diff:/var/lib/docker/overlay2/cc9805f6c705d4d0c6c7675e7745ab0dcdd90879809a2089256c0606e80cee7a/diff:/var/lib/docker/overlay2/e34b4063934c19fe1e614a10ef1e9582f55283fa37c9d0b89d0df8ca32a8a03a/diff:/var/lib/docker/overlay2/c6b6cf801ae9739234022d5e5c55176ee1249b3441400f8b9dbde2c15c6d66e3/diff:/var/lib/docker/overlay2/73dfe58a9f4125f321d10ef97d5c2d4951480455bb243f166600ead63c22f5c2/diff:/var/lib/docker/overlay2/476ba412f9e61cc020124b5051db9c99ea08176881e535e0b5fe6ddb51b94a72/diff:/var/lib/docker/overlay2/2729a4e84f2d55dc49c9417254fc26c0baa21f93cd9b58386f869cf5add162c1/diff:/var/lib/docker/overlay2/8523001ce06172b58b31ebf311f62bf435ed3a3d48fec58d3f1239f29386a28b/diff:/var/lib/docker/overlay2/2b7edb3177897200229f3ba188cfd00e16df91cf85b91a5f08ddbfa15d898a3d/diff:/var/lib/docker/overlay2/94231ff2ac5bf304d3c25d204f1a7b2195ef2230bfbb7bb5a1a1d6f2f4faad6a/diff:/var/lib/docker/overlay2/698d3cd800bae40e0aeb942360c67b793550c24bab66ba43080cbcaa500a9069/diff:/var/lib/docker/overlay2/6aadd46423b70866f00e0f4f83310711c1bc22b4dc8989e6b58cd6254540c428/diff:/var/lib/docker/overlay2/035afbe91bfd3bebd444b29f3ceed1e954aab275fca0c8aaf2364df71f46e0c3/diff:/var/lib/docker/overlay2/bc68049ba1568fe8bb188720c62bcc993e62a364901ba41a533aa2991cceaf82/diff:/var/lib/docker/overlay2/c3373595ff40ba0ece2698f99fc2e1c9a83c0ef6a1df119125e3009256dee2ed/diff:/var/lib/docker/overlay2/59c87dca7d8987a7e1b5cd959772e06b96d6ecb36399ff9e35a1ecfe4ed33345/diff:/var/lib/docker/overlay2/22434c33a4994657a469b040789f269ac912f4046d76f2531dff05de4700fb3b/diff:/var/lib/docker/overlay2/699ea76dd0a43fedc031501535714f087d7ec3f37593390c9e81c029373c7f8f/diff:/var/lib/docker/overlay2/e9414c264977801651ed9f3ee268cd0f245614747e184e8f3170e1e95d1fc081/diff:/var/lib/docker/overlay2/2781a0c689754699793aa9bdfeeabdaa1c6905e265302dd267c6c12daa01eb9c/diff:/var/lib/docker/overlay2/4b59a1fc73d3e865eaf7e2e62fd6d2808234c79d79b6b30f6b1a482a291580d3/diff:/var/lib/docker/overlay2/7f51e83dcff3227064daa2b7cc6a7c87f8f5e415fa8723316c24512d6029941d/diff:/var/lib/docker/overlay2/50662c60babc4d383f2af76fc66f3712bcc9e85a50f0525fa680c8336af46ce3/diff:/var/lib/docker/overlay2/2112d8437fae31ae95f85bdf08e3f29d09d7b8adf34c9608a2e3bfecc049e0c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8fab6906b656bcd6c37bac3122f87989b3f1a374377d9b548832f7a05b7f2d5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-136000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-136000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-136000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-136000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b43f019aed40f7f6d26e5fc19850e1e26591afe1aebb383bfc62a7e02b87e1da",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55352"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55353"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55354"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55356"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b43f019aed40",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-136000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "845795d4cf37",
	                        "old-k8s-version-136000"
	                    ],
	                    "NetworkID": "a4c82c2a3592223db620bf95332091613324019646bbe58152af123c5085aba4",
	                    "EndpointID": "9d19243bdc4b0034b95a676b71e1e9f6a1d25ba7078faa4d4b80def87e2b6889",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
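The JSON above is the full "docker container inspect" dump that the post-mortem helper captures for the old-k8s-version-136000 node. When only a single field is needed, the same information can be pulled with a Go template passed via --format, which is what the {{.State.Status}} and {{.NetworkSettings.Ports ...}} calls later in this log do. A minimal sketch of that pattern, assuming docker is on PATH; inspectField is an illustrative helper name, not minikube's API:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // inspectField shells out to `docker container inspect` with a Go template,
    // mirroring the --format={{.State.Status}} calls seen in this log.
    func inspectField(container, tmpl string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", container,
            "--format", tmpl).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        status, err := inspectField("old-k8s-version-136000", "{{.State.Status}}")
        if err != nil {
            fmt.Println("inspect failed:", err, status)
            return
        }
        fmt.Println("container state:", status)
    }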
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 2 (407.125639ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
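The "may be ok" note reflects that minikube status reports state both on stdout and in its exit code, which encodes which components are up, so a non-zero exit can coexist with the host line printing Running. A rough sketch of how a caller could tolerate that, under the assumption that only the exit code distinguishes a fully healthy cluster from a partially running one:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Re-run the same status probe as above and branch on the exit code,
        // the way helpers_test.go tolerates exit status 2 here.
        cmd := exec.Command("out/minikube-darwin-amd64", "status",
            "--format={{.Host}}", "-p", "old-k8s-version-136000")
        out, err := cmd.CombinedOutput()
        host := strings.TrimSpace(string(out))
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all components up; host:", host)
        case errors.As(err, &exitErr):
            fmt.Printf("host=%q, exit=%d: some component is not Running\n", host, exitErr.ExitCode())
        default:
            fmt.Println("could not run status:", err)
        }
    }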
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-136000 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-136000 logs -n 25: (3.564711069s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-136000   | old-k8s-version-136000       | jenkins | v1.29.0 | 03 Feb 23 15:05 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-136000                         | old-k8s-version-136000       | jenkins | v1.29.0 | 03 Feb 23 15:06 PST | 03 Feb 23 15:06 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-136000        | old-k8s-version-136000       | jenkins | v1.29.0 | 03 Feb 23 15:06 PST | 03 Feb 23 15:06 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-136000                         | old-k8s-version-136000       | jenkins | v1.29.0 | 03 Feb 23 15:06 PST |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-520000 sudo                         | no-preload-520000            | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-520000                              | no-preload-520000            | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-520000                              | no-preload-520000            | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-520000                              | no-preload-520000            | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	| delete  | -p no-preload-520000                              | no-preload-520000            | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:12 PST |
	| start   | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:12 PST | 03 Feb 23 15:13 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-913000       | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:13 PST | 03 Feb 23 15:13 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:13 PST | 03 Feb 23 15:13 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-913000            | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:13 PST | 03 Feb 23 15:13 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:13 PST | 03 Feb 23 15:22 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-913000 sudo                        | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:23 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:23 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:23 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:23 PST |
	| delete  | -p embed-certs-913000                             | embed-certs-913000           | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:23 PST |
	| delete  | -p                                                | disable-driver-mounts-350000 | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:23 PST |
	|         | disable-driver-mounts-350000                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-893000 | jenkins | v1.29.0 | 03 Feb 23 15:23 PST | 03 Feb 23 15:24 PST |
	|         | default-k8s-diff-port-893000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-diff-port-893000 | jenkins | v1.29.0 | 03 Feb 23 15:24 PST | 03 Feb 23 15:24 PST |
	|         | default-k8s-diff-port-893000                      |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-diff-port-893000 | jenkins | v1.29.0 | 03 Feb 23 15:24 PST | 03 Feb 23 15:24 PST |
	|         | default-k8s-diff-port-893000                      |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-893000  | default-k8s-diff-port-893000 | jenkins | v1.29.0 | 03 Feb 23 15:24 PST | 03 Feb 23 15:24 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-893000 | jenkins | v1.29.0 | 03 Feb 23 15:24 PST | 03 Feb 23 15:33 PST |
	|         | default-k8s-diff-port-893000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
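The Audit table is minikube's per-invocation command log; rows with an empty End Time are commands that had not completed when this log was captured, which matches the failing old-k8s-version-136000 start issued at 15:06. The same data is kept on disk as an audit log; a minimal sketch that tails it, assuming the default ~/.minikube/logs/audit.json location (this CI run sets MINIKUBE_HOME, so the real path sits under the minikube-integration directory) and without assuming the exact row schema:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        home, _ := os.UserHomeDir()
        path := filepath.Join(home, ".minikube", "logs", "audit.json")
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println("no audit log:", err)
            return
        }
        // Print the last few raw entries as-is.
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        start := len(lines) - 5
        if start < 0 {
            start = 0
        }
        for _, l := range lines[start:] {
            fmt.Println(l)
        }
    }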
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/03 15:24:27
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 15:24:27.693221   23025 out.go:296] Setting OutFile to fd 1 ...
	I0203 15:24:27.693404   23025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 15:24:27.693409   23025 out.go:309] Setting ErrFile to fd 2...
	I0203 15:24:27.693413   23025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 15:24:27.693508   23025 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 15:24:27.693987   23025 out.go:303] Setting JSON to false
	I0203 15:24:27.713270   23025 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5042,"bootTime":1675461625,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0203 15:24:27.713368   23025 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0203 15:24:27.735559   23025 out.go:177] * [default-k8s-diff-port-893000] minikube v1.29.0 on Darwin 13.2
	I0203 15:24:27.778486   23025 notify.go:220] Checking for updates...
	I0203 15:24:27.800288   23025 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 15:24:27.842263   23025 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 15:24:27.864420   23025 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0203 15:24:27.886012   23025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 15:24:27.907260   23025 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	I0203 15:24:27.928229   23025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 15:24:27.949709   23025 config.go:180] Loaded profile config "default-k8s-diff-port-893000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 15:24:27.950253   23025 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 15:24:28.010607   23025 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0203 15:24:28.010761   23025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 15:24:28.152261   23025 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 23:24:28.060264062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 15:24:28.195923   23025 out.go:177] * Using the docker driver based on existing profile
	I0203 15:24:28.217822   23025 start.go:296] selected driver: docker
	I0203 15:24:28.217850   23025 start.go:857] validating driver "docker" against &{Name:default-k8s-diff-port-893000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-893000 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:24:28.218020   23025 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 15:24:28.221864   23025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 15:24:28.364904   23025 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 23:24:28.273449601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 15:24:28.365054   23025 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0203 15:24:28.365073   23025 cni.go:84] Creating CNI manager for ""
	I0203 15:24:28.365086   23025 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 15:24:28.365094   23025 start_flags.go:319] config:
	{Name:default-k8s-diff-port-893000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-893000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:24:28.387171   23025 out.go:177] * Starting control plane node default-k8s-diff-port-893000 in cluster default-k8s-diff-port-893000
	I0203 15:24:28.408844   23025 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 15:24:28.430693   23025 out.go:177] * Pulling base image ...
	I0203 15:24:28.472815   23025 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 15:24:28.472827   23025 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 15:24:28.472913   23025 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0203 15:24:28.472933   23025 cache.go:57] Caching tarball of preloaded images
	I0203 15:24:28.473150   23025 preload.go:174] Found /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0203 15:24:28.473172   23025 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0203 15:24:28.474187   23025 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/config.json ...
	I0203 15:24:28.532335   23025 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon, skipping pull
	I0203 15:24:28.532349   23025 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in daemon, skipping load
	I0203 15:24:28.532369   23025 cache.go:193] Successfully downloaded all kic artifacts
	I0203 15:24:28.532407   23025 start.go:364] acquiring machines lock for default-k8s-diff-port-893000: {Name:mk878f02f565e8fdfaecc254209cf866c1a40f3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0203 15:24:28.532500   23025 start.go:368] acquired machines lock for "default-k8s-diff-port-893000" in 66.801µs
	I0203 15:24:28.532528   23025 start.go:96] Skipping create...Using existing machine configuration
	I0203 15:24:28.532540   23025 fix.go:55] fixHost starting: 
	I0203 15:24:28.532776   23025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-893000 --format={{.State.Status}}
	I0203 15:24:28.589680   23025 fix.go:103] recreateIfNeeded on default-k8s-diff-port-893000: state=Stopped err=<nil>
	W0203 15:24:28.589708   23025 fix.go:129] unexpected machine state, will restart: <nil>
	I0203 15:24:28.633186   23025 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-893000" ...
	I0203 15:24:28.654276   23025 cli_runner.go:164] Run: docker start default-k8s-diff-port-893000
	I0203 15:24:28.986255   23025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-893000 --format={{.State.Status}}
	I0203 15:24:29.045886   23025 kic.go:426] container "default-k8s-diff-port-893000" state is running.
	I0203 15:24:29.046498   23025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-893000
	I0203 15:24:29.107171   23025 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/config.json ...
	I0203 15:24:29.107587   23025 machine.go:88] provisioning docker machine ...
	I0203 15:24:29.107618   23025 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-893000"
	I0203 15:24:29.107684   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:29.180431   23025 main.go:141] libmachine: Using SSH client type: native
	I0203 15:24:29.180651   23025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56288 <nil> <nil>}
	I0203 15:24:29.180665   23025 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-893000 && echo "default-k8s-diff-port-893000" | sudo tee /etc/hostname
	I0203 15:24:29.338253   23025 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-893000
	
	I0203 15:24:29.338368   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:29.400238   23025 main.go:141] libmachine: Using SSH client type: native
	I0203 15:24:29.400402   23025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56288 <nil> <nil>}
	I0203 15:24:29.400419   23025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-893000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-893000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-893000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0203 15:24:29.535058   23025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
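The SSH script above keeps /etc/hosts in sync with the new hostname: it does nothing if the name is already present, rewrites an existing 127.0.1.1 line if there is one, and otherwise appends a new entry. A pure-string sketch of the same logic in Go, operating on an in-memory hosts file rather than over SSH; patchHosts is an illustrative name, not minikube's implementation:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // patchHosts mirrors the shell above: skip if the hostname already appears
    // at the end of a line, else replace a 127.0.1.1 entry or append one.
    func patchHosts(hosts, name string) string {
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if re.MatchString(hosts) {
            return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
        fmt.Print(patchHosts(in, "default-k8s-diff-port-893000"))
    }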
	I0203 15:24:29.535084   23025 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15770-1719/.minikube CaCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15770-1719/.minikube}
	I0203 15:24:29.535100   23025 ubuntu.go:177] setting up certificates
	I0203 15:24:29.535109   23025 provision.go:83] configureAuth start
	I0203 15:24:29.535190   23025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-893000
	I0203 15:24:29.592588   23025 provision.go:138] copyHostCerts
	I0203 15:24:29.592685   23025 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem, removing ...
	I0203 15:24:29.592696   23025 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem
	I0203 15:24:29.592798   23025 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.pem (1078 bytes)
	I0203 15:24:29.593016   23025 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem, removing ...
	I0203 15:24:29.593023   23025 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem
	I0203 15:24:29.593090   23025 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/cert.pem (1123 bytes)
	I0203 15:24:29.593240   23025 exec_runner.go:144] found /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem, removing ...
	I0203 15:24:29.593248   23025 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem
	I0203 15:24:29.593311   23025 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15770-1719/.minikube/key.pem (1675 bytes)
	I0203 15:24:29.593434   23025 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-893000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-893000]
	I0203 15:24:29.649673   23025 provision.go:172] copyRemoteCerts
	I0203 15:24:29.649736   23025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0203 15:24:29.649789   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:29.708342   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:24:29.801386   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0203 15:24:29.818642   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0203 15:24:29.835979   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0203 15:24:29.852903   23025 provision.go:86] duration metric: configureAuth took 317.7715ms
	I0203 15:24:29.852920   23025 ubuntu.go:193] setting minikube options for container-runtime
	I0203 15:24:29.853089   23025 config.go:180] Loaded profile config "default-k8s-diff-port-893000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 15:24:29.853156   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:29.910182   23025 main.go:141] libmachine: Using SSH client type: native
	I0203 15:24:29.910340   23025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56288 <nil> <nil>}
	I0203 15:24:29.910349   23025 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0203 15:24:30.040135   23025 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0203 15:24:30.040147   23025 ubuntu.go:71] root file system type: overlay
	I0203 15:24:30.040290   23025 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0203 15:24:30.040370   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:30.097456   23025 main.go:141] libmachine: Using SSH client type: native
	I0203 15:24:30.097608   23025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56288 <nil> <nil>}
	I0203 15:24:30.097665   23025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0203 15:24:30.232368   23025 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0203 15:24:30.232463   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:30.289010   23025 main.go:141] libmachine: Using SSH client type: native
	I0203 15:24:30.289172   23025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56288 <nil> <nil>}
	I0203 15:24:30.289186   23025 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0203 15:24:30.418349   23025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
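Two details of the unit handling above are worth noting. The rendered service file first sets an empty "ExecStart=" before the real one, which is how systemd clears an inherited list-type directive, as the comments in the unit itself explain. And the final SSH one-liner only swaps docker.service.new into place and restarts Docker when "diff -u" reports a change, so an unchanged config does not bounce the daemon. A simplified local-file sketch of that write-compare-replace pattern, using illustrative names and plain files rather than systemd:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // replaceIfChanged writes newContent only when it differs from what is on
    // disk and reports whether a follow-up action (e.g. a restart) is needed.
    func replaceIfChanged(path string, newContent []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContent) {
            return false, nil // identical: skip the restart
        }
        if err := os.WriteFile(path, newContent, 0o644); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        unit := []byte("[Service]\nExecStart=\nExecStart=/usr/bin/dockerd --example-flags\n")
        changed, err := replaceIfChanged("docker.service.copy", unit)
        fmt.Println("changed:", changed, "err:", err)
    }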
	I0203 15:24:30.418364   23025 machine.go:91] provisioned docker machine in 1.310739819s
	I0203 15:24:30.418371   23025 start.go:300] post-start starting for "default-k8s-diff-port-893000" (driver="docker")
	I0203 15:24:30.418377   23025 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0203 15:24:30.418443   23025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0203 15:24:30.418497   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:30.475837   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:24:30.569446   23025 ssh_runner.go:195] Run: cat /etc/os-release
	I0203 15:24:30.573137   23025 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0203 15:24:30.573153   23025 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0203 15:24:30.573160   23025 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0203 15:24:30.573165   23025 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0203 15:24:30.573172   23025 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/addons for local assets ...
	I0203 15:24:30.573265   23025 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15770-1719/.minikube/files for local assets ...
	I0203 15:24:30.573420   23025 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem -> 25682.pem in /etc/ssl/certs
	I0203 15:24:30.573591   23025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0203 15:24:30.581005   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /etc/ssl/certs/25682.pem (1708 bytes)
	I0203 15:24:30.598147   23025 start.go:303] post-start completed in 179.76249ms
	I0203 15:24:30.598225   23025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 15:24:30.598278   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:30.654761   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:24:30.742829   23025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0203 15:24:30.747762   23025 fix.go:57] fixHost completed within 2.215169081s
	I0203 15:24:30.747781   23025 start.go:83] releasing machines lock for "default-k8s-diff-port-893000", held for 2.215224168s
	I0203 15:24:30.747864   23025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-893000
	I0203 15:24:30.806626   23025 ssh_runner.go:195] Run: cat /version.json
	I0203 15:24:30.806629   23025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0203 15:24:30.806695   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:30.806720   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:30.867933   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:24:30.868192   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:24:30.956694   23025 ssh_runner.go:195] Run: systemctl --version
	I0203 15:24:31.013369   23025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0203 15:24:31.019225   23025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0203 15:24:31.035150   23025 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0203 15:24:31.035255   23025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0203 15:24:31.043047   23025 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0203 15:24:31.055820   23025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0203 15:24:31.063539   23025 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0203 15:24:31.063557   23025 start.go:483] detecting cgroup driver to use...
	I0203 15:24:31.063568   23025 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 15:24:31.063650   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 15:24:31.076599   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0203 15:24:31.085219   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0203 15:24:31.093605   23025 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0203 15:24:31.093669   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0203 15:24:31.102210   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 15:24:31.110761   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0203 15:24:31.119347   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0203 15:24:31.127776   23025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0203 15:24:31.135488   23025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0203 15:24:31.143935   23025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0203 15:24:31.151381   23025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0203 15:24:31.158558   23025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:24:31.224059   23025 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0203 15:24:31.298280   23025 start.go:483] detecting cgroup driver to use...
	I0203 15:24:31.298303   23025 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0203 15:24:31.298383   23025 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0203 15:24:31.308937   23025 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0203 15:24:31.309004   23025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0203 15:24:31.319042   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0203 15:24:31.333401   23025 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0203 15:24:31.434618   23025 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0203 15:24:31.531084   23025 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0203 15:24:31.531102   23025 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0203 15:24:31.544043   23025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:24:31.636041   23025 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0203 15:24:31.923477   23025 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 15:24:31.995079   23025 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0203 15:24:32.075303   23025 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0203 15:24:32.142629   23025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0203 15:24:32.215735   23025 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0203 15:24:32.227510   23025 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0203 15:24:32.227599   23025 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0203 15:24:32.231817   23025 start.go:551] Will wait 60s for crictl version
	I0203 15:24:32.231866   23025 ssh_runner.go:195] Run: which crictl
	I0203 15:24:32.235408   23025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0203 15:24:32.346409   23025 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0203 15:24:32.346489   23025 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 15:24:32.375797   23025 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0203 15:24:32.454282   23025 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0203 15:24:32.454426   23025 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-893000 dig +short host.docker.internal
	I0203 15:24:32.607333   23025 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0203 15:24:32.607444   23025 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0203 15:24:32.611933   23025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 15:24:32.621797   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:32.679249   23025 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 15:24:32.679318   23025 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 15:24:32.704074   23025 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0203 15:24:32.715804   23025 docker.go:560] Images already preloaded, skipping extraction
	I0203 15:24:32.715943   23025 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0203 15:24:32.740367   23025 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0203 15:24:32.740392   23025 cache_images.go:84] Images are preloaded, skipping loading
	I0203 15:24:32.740481   23025 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0203 15:24:32.807886   23025 cni.go:84] Creating CNI manager for ""
	I0203 15:24:32.807904   23025 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 15:24:32.807920   23025 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0203 15:24:32.807938   23025 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-893000 NodeName:default-k8s-diff-port-893000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0203 15:24:32.808069   23025 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-893000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0203 15:24:32.808146   23025 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-893000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-893000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0203 15:24:32.808209   23025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0203 15:24:32.816379   23025 binaries.go:44] Found k8s binaries, skipping transfer
	I0203 15:24:32.816445   23025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0203 15:24:32.823687   23025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (460 bytes)
	I0203 15:24:32.836785   23025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0203 15:24:32.849336   23025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
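
The three copies above are the standard kubelet wiring: the 10-kubeadm.conf drop-in that rewrites ExecStart, the kubelet unit itself, and the rendered kubeadm config staged as kubeadm.yaml.new. To see what the kubelet will actually execute and what changed since the previous start, the same tools minikube uses elsewhere in this log apply (a sketch):

	sudo systemctl cat kubelet
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
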
	I0203 15:24:32.862509   23025 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0203 15:24:32.866438   23025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0203 15:24:32.876519   23025 certs.go:56] Setting up /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000 for IP: 192.168.76.2
	I0203 15:24:32.876537   23025 certs.go:186] acquiring lock for shared ca certs: {Name:mkdec04c6cc16ac0dcab0ae849b602e6c1942576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:24:32.876714   23025 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key
	I0203 15:24:32.876771   23025 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key
	I0203 15:24:32.876859   23025 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.key
	I0203 15:24:32.876918   23025 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/apiserver.key.31bdca25
	I0203 15:24:32.876969   23025 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/proxy-client.key
	I0203 15:24:32.877178   23025 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem (1338 bytes)
	W0203 15:24:32.877216   23025 certs.go:397] ignoring /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568_empty.pem, impossibly tiny 0 bytes
	I0203 15:24:32.877227   23025 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca-key.pem (1675 bytes)
	I0203 15:24:32.877265   23025 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/ca.pem (1078 bytes)
	I0203 15:24:32.877297   23025 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/cert.pem (1123 bytes)
	I0203 15:24:32.877331   23025 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/certs/key.pem (1675 bytes)
	I0203 15:24:32.877398   23025 certs.go:401] found cert: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem (1708 bytes)
	I0203 15:24:32.879106   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0203 15:24:32.896855   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0203 15:24:32.913915   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0203 15:24:32.931181   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0203 15:24:32.948190   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0203 15:24:32.965453   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0203 15:24:32.982931   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0203 15:24:33.000190   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0203 15:24:33.017540   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0203 15:24:33.035266   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/certs/2568.pem --> /usr/share/ca-certificates/2568.pem (1338 bytes)
	I0203 15:24:33.052518   23025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/ssl/certs/25682.pem --> /usr/share/ca-certificates/25682.pem (1708 bytes)
	I0203 15:24:33.069522   23025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0203 15:24:33.082482   23025 ssh_runner.go:195] Run: openssl version
	I0203 15:24:33.088048   23025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2568.pem && ln -fs /usr/share/ca-certificates/2568.pem /etc/ssl/certs/2568.pem"
	I0203 15:24:33.096657   23025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2568.pem
	I0203 15:24:33.100643   23025 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb  3 22:13 /usr/share/ca-certificates/2568.pem
	I0203 15:24:33.100699   23025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2568.pem
	I0203 15:24:33.105976   23025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2568.pem /etc/ssl/certs/51391683.0"
	I0203 15:24:33.113496   23025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25682.pem && ln -fs /usr/share/ca-certificates/25682.pem /etc/ssl/certs/25682.pem"
	I0203 15:24:33.122016   23025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25682.pem
	I0203 15:24:33.126580   23025 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb  3 22:13 /usr/share/ca-certificates/25682.pem
	I0203 15:24:33.126641   23025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25682.pem
	I0203 15:24:33.132573   23025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/25682.pem /etc/ssl/certs/3ec20f2e.0"
	I0203 15:24:33.140484   23025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0203 15:24:33.149058   23025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0203 15:24:33.153333   23025 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb  3 22:08 /usr/share/ca-certificates/minikubeCA.pem
	I0203 15:24:33.153387   23025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0203 15:24:33.159448   23025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
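
The hash-named links created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash symlinks: openssl x509 -hash -noout -in <cert> prints an 8-hex-digit value, and a <hash>.0 link under /etc/ssl/certs lets the system trust store locate the certificate. Condensed for the minikube CA, the sequence is:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
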
	I0203 15:24:33.167462   23025 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-893000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-893000 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 15:24:33.167586   23025 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 15:24:33.191809   23025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0203 15:24:33.199797   23025 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0203 15:24:33.199813   23025 kubeadm.go:633] restartCluster start
	I0203 15:24:33.199870   23025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0203 15:24:33.208621   23025 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:33.208698   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:24:33.267148   23025 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-893000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 15:24:33.267304   23025 kubeconfig.go:146] "default-k8s-diff-port-893000" context is missing from /Users/jenkins/minikube-integration/15770-1719/kubeconfig - will repair!
	I0203 15:24:33.267660   23025 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/kubeconfig: {Name:mkf113f45b09a6304f4248a99f0e16d42a3468fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:24:33.268936   23025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0203 15:24:33.276848   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:33.276916   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:33.285486   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:33.785567   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:33.785688   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:33.796780   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:34.286334   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:34.286500   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:34.297703   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:34.787221   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:34.787471   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:34.798726   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:35.287102   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:35.287353   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:35.298571   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:35.784960   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:35.785067   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:35.796130   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:36.285752   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:36.285816   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:36.294842   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:36.784692   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:36.784777   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:36.794074   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:37.284632   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:37.284693   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:37.294404   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:37.785386   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:37.785559   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:37.796510   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:38.285502   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:38.285727   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:38.296707   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:38.785372   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:38.785510   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:38.796523   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:39.284556   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:39.284685   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:39.295823   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:39.786127   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:39.786280   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:39.797425   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:40.284032   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:40.284168   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:40.294727   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:40.783908   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:40.783978   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:40.793781   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:41.285904   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:41.286063   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:41.296886   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:41.785852   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:41.786020   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:41.797343   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:42.284911   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:42.285103   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:42.296051   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:42.784271   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:42.784376   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:42.795808   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:43.285617   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:43.285785   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:43.296649   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:43.296657   23025 api_server.go:165] Checking apiserver status ...
	I0203 15:24:43.296708   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0203 15:24:43.305274   23025 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:43.305286   23025 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0203 15:24:43.305295   23025 kubeadm.go:1120] stopping kube-system containers ...
	I0203 15:24:43.305365   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0203 15:24:43.329207   23025 docker.go:456] Stopping containers: [38d7cb745de2 4dec76784a5d f8df6d511735 c4181f874074 6204faaa30e5 d1b6ee9ce124 884e2656370e 9d03753bd91c a88cb6fb0893 52cacc6d68f8 a49bfd3d5c68 918f836b7964 44cab55a9f58 2cc1456a5822 6c9d6902efca f1f716352a18]
	I0203 15:24:43.329289   23025 ssh_runner.go:195] Run: docker stop 38d7cb745de2 4dec76784a5d f8df6d511735 c4181f874074 6204faaa30e5 d1b6ee9ce124 884e2656370e 9d03753bd91c a88cb6fb0893 52cacc6d68f8 a49bfd3d5c68 918f836b7964 44cab55a9f58 2cc1456a5822 6c9d6902efca f1f716352a18
	I0203 15:24:43.354217   23025 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0203 15:24:43.364753   23025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 15:24:43.372420   23025 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Feb  3 23:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Feb  3 23:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Feb  3 23:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Feb  3 23:23 /etc/kubernetes/scheduler.conf
	
	I0203 15:24:43.372495   23025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0203 15:24:43.380038   23025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0203 15:24:43.387424   23025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0203 15:24:43.394856   23025 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:43.394906   23025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0203 15:24:43.402109   23025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0203 15:24:43.409435   23025 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0203 15:24:43.409489   23025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0203 15:24:43.417041   23025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 15:24:43.424692   23025 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0203 15:24:43.424707   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:24:43.478937   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:24:44.200482   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:24:44.337124   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:24:44.401733   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:24:44.517817   23025 api_server.go:51] waiting for apiserver process to appear ...
	I0203 15:24:44.517894   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:24:45.028066   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:24:45.527827   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:24:45.602110   23025 api_server.go:71] duration metric: took 1.084426758s to wait for apiserver process to appear ...
	I0203 15:24:45.602204   23025 api_server.go:87] waiting for apiserver healthz status ...
	I0203 15:24:45.602262   23025 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56287/healthz ...
	I0203 15:24:45.604556   23025 api_server.go:268] stopped: https://127.0.0.1:56287/healthz: Get "https://127.0.0.1:56287/healthz": EOF
	I0203 15:24:46.104890   23025 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56287/healthz ...
	I0203 15:24:48.080875   23025 api_server.go:278] https://127.0.0.1:56287/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 15:24:48.080893   23025 api_server.go:102] status: https://127.0.0.1:56287/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 15:24:48.105424   23025 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56287/healthz ...
	I0203 15:24:48.116478   23025 api_server.go:278] https://127.0.0.1:56287/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0203 15:24:48.116498   23025 api_server.go:102] status: https://127.0.0.1:56287/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0203 15:24:48.604939   23025 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56287/healthz ...
	I0203 15:24:48.611880   23025 api_server.go:278] https://127.0.0.1:56287/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 15:24:48.611898   23025 api_server.go:102] status: https://127.0.0.1:56287/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 15:24:49.104300   23025 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56287/healthz ...
	I0203 15:24:49.110056   23025 api_server.go:278] https://127.0.0.1:56287/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0203 15:24:49.110074   23025 api_server.go:102] status: https://127.0.0.1:56287/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0203 15:24:49.604952   23025 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56287/healthz ...
	I0203 15:24:49.611938   23025 api_server.go:278] https://127.0.0.1:56287/healthz returned 200:
	ok
	I0203 15:24:49.618528   23025 api_server.go:140] control plane version: v1.26.1
	I0203 15:24:49.618541   23025 api_server.go:130] duration metric: took 4.016726948s to wait for apiserver health ...
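
The health wait above can be reproduced by hand against the host-mapped apiserver port for this run (56287). Adding ?verbose returns the same per-hook breakdown seen in the 500 responses, and anonymous requests may be rejected with 403 until the rbac/bootstrap-roles hook finishes, exactly as logged:

	curl -k https://127.0.0.1:56287/healthz?verbose
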
	I0203 15:24:49.618549   23025 cni.go:84] Creating CNI manager for ""
	I0203 15:24:49.618559   23025 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 15:24:49.641804   23025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0203 15:24:49.678675   23025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 15:24:49.687238   23025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
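
The 457-byte conflist copied above is minikube's bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen earlier. The exact contents are not shown in the log; a representative bridge-plus-portmap conflist of the same shape (an assumption, not the literal file) looks like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
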
	I0203 15:24:49.700566   23025 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 15:24:49.708193   23025 system_pods.go:59] 8 kube-system pods found
	I0203 15:24:49.708209   23025 system_pods.go:61] "coredns-787d4945fb-8pts4" [2f9dc064-c47c-4f96-94c0-50d73ee6d52d] Running
	I0203 15:24:49.708215   23025 system_pods.go:61] "etcd-default-k8s-diff-port-893000" [aa42aede-f167-4643-95a4-adb688f31168] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0203 15:24:49.708229   23025 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893000" [1751c761-f8ba-4647-b452-e0f6f0eea6a4] Running
	I0203 15:24:49.708234   23025 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893000" [183d323e-558e-4e71-9998-b7ed474e5b8a] Running
	I0203 15:24:49.708237   23025 system_pods.go:61] "kube-proxy-sd878" [eaebef28-c216-4588-aa62-ad2f0eed3781] Running
	I0203 15:24:49.708242   23025 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893000" [a2907079-37b6-4465-b933-7ae7497ade1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0203 15:24:49.708249   23025 system_pods.go:61] "metrics-server-7997d45854-q8qw5" [ca31823e-72cc-4866-8b92-6b416afcd7a9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0203 15:24:49.708253   23025 system_pods.go:61] "storage-provisioner" [952b0546-7da3-4b64-9f16-6494931ea921] Running
	I0203 15:24:49.708258   23025 system_pods.go:74] duration metric: took 7.681997ms to wait for pod list to return data ...
	I0203 15:24:49.708265   23025 node_conditions.go:102] verifying NodePressure condition ...
	I0203 15:24:49.711565   23025 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0203 15:24:49.711579   23025 node_conditions.go:123] node cpu capacity is 6
	I0203 15:24:49.711588   23025 node_conditions.go:105] duration metric: took 3.319344ms to run NodePressure ...
	I0203 15:24:49.711600   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0203 15:24:49.846609   23025 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0203 15:24:49.850576   23025 kubeadm.go:784] kubelet initialised
	I0203 15:24:49.850587   23025 kubeadm.go:785] duration metric: took 3.964198ms waiting for restarted kubelet to initialise ...
	I0203 15:24:49.850593   23025 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 15:24:49.855425   23025 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-8pts4" in "kube-system" namespace to be "Ready" ...
	I0203 15:24:49.860080   23025 pod_ready.go:92] pod "coredns-787d4945fb-8pts4" in "kube-system" namespace has status "Ready":"True"
	I0203 15:24:49.860088   23025 pod_ready.go:81] duration metric: took 4.652942ms waiting for pod "coredns-787d4945fb-8pts4" in "kube-system" namespace to be "Ready" ...
	I0203 15:24:49.860094   23025 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:24:51.872033   23025 pod_ready.go:102] pod "etcd-default-k8s-diff-port-893000" in "kube-system" namespace has status "Ready":"False"
	I0203 15:24:53.873752   23025 pod_ready.go:102] pod "etcd-default-k8s-diff-port-893000" in "kube-system" namespace has status "Ready":"False"
	I0203 15:24:56.371165   23025 pod_ready.go:102] pod "etcd-default-k8s-diff-port-893000" in "kube-system" namespace has status "Ready":"False"
	I0203 15:24:57.870888   23025 pod_ready.go:92] pod "etcd-default-k8s-diff-port-893000" in "kube-system" namespace has status "Ready":"True"
	I0203 15:24:57.870904   23025 pod_ready.go:81] duration metric: took 8.011292002s waiting for pod "etcd-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:24:57.870913   23025 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:24:59.886110   23025 pod_ready.go:102] pod "kube-apiserver-default-k8s-diff-port-893000" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:01.882257   23025 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-893000" in "kube-system" namespace has status "Ready":"True"
	I0203 15:25:01.882276   23025 pod_ready.go:81] duration metric: took 4.011492481s waiting for pod "kube-apiserver-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:25:01.882286   23025 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:25:01.903851   23025 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-893000" in "kube-system" namespace has status "Ready":"True"
	I0203 15:25:01.903862   23025 pod_ready.go:81] duration metric: took 21.570656ms waiting for pod "kube-controller-manager-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:25:01.903868   23025 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-sd878" in "kube-system" namespace to be "Ready" ...
	I0203 15:25:01.908835   23025 pod_ready.go:92] pod "kube-proxy-sd878" in "kube-system" namespace has status "Ready":"True"
	I0203 15:25:01.908844   23025 pod_ready.go:81] duration metric: took 4.970918ms waiting for pod "kube-proxy-sd878" in "kube-system" namespace to be "Ready" ...
	I0203 15:25:01.908850   23025 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:25:01.913102   23025 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-893000" in "kube-system" namespace has status "Ready":"True"
	I0203 15:25:01.913112   23025 pod_ready.go:81] duration metric: took 4.256817ms waiting for pod "kube-scheduler-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:25:01.913119   23025 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace to be "Ready" ...
	I0203 15:25:03.923747   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:05.923983   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:07.924066   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:10.424698   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:12.924383   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:15.424468   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:17.425165   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:19.425283   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:21.922764   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:23.923581   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:25.924335   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:28.425620   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:30.923281   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:32.924566   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:35.423309   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:37.425345   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:39.924703   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:42.424450   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:44.924820   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:46.925279   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:49.426205   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:51.923658   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:53.923813   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:55.925239   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:25:58.424792   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:00.425106   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:02.925546   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:05.424240   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:07.426009   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:09.924639   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:11.926048   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:14.424343   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:16.424699   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:18.426166   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:20.924495   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:22.926359   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:25.426236   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:27.924414   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:29.925664   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:31.926147   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:34.425715   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:36.924830   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:38.925553   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:40.926493   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:43.426140   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:45.426444   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:47.926145   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:50.425422   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:52.926259   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:55.426559   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:26:57.935690   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:00.427085   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:02.926739   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:05.425021   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:07.426000   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:09.426181   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:11.927384   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:14.425616   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:16.426933   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:18.925025   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:20.925841   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:22.926553   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:25.427359   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:27.925891   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:29.927275   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:32.425958   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:34.427568   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:36.925830   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:39.426659   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:41.927771   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:44.425949   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:46.426989   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:48.925543   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:50.927622   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:53.426031   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:55.426163   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:57.427363   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:27:59.928700   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:02.426094   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:04.426539   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:06.427101   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:08.925700   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:10.926493   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:13.427628   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:15.428593   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:17.927661   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:20.426880   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:22.428491   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:24.927651   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:26.928027   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:29.426825   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:31.429267   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:33.927295   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:35.927496   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:37.928379   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:40.428645   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:42.927097   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:44.927494   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:46.928350   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:49.429025   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:51.927267   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:53.927857   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:56.426855   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:28:58.428428   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:29:00.927344   23025 pod_ready.go:102] pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace has status "Ready":"False"
	I0203 15:29:01.921518   23025 pod_ready.go:81] duration metric: took 4m0.004360123s waiting for pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace to be "Ready" ...
	E0203 15:29:01.921549   23025 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7997d45854-q8qw5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0203 15:29:01.921574   23025 pod_ready.go:38] duration metric: took 4m12.067573753s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 15:29:01.921606   23025 kubeadm.go:637] restartCluster took 4m28.721148904s
	W0203 15:29:01.921804   23025 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0203 15:29:01.921835   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0203 15:29:06.074676   23025 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.152742132s)
	I0203 15:29:06.074751   23025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 15:29:06.084734   23025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0203 15:29:06.092381   23025 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0203 15:29:06.092430   23025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0203 15:29:06.099975   23025 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0203 15:29:06.100003   23025 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0203 15:29:06.148870   23025 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0203 15:29:06.148920   23025 kubeadm.go:322] [preflight] Running pre-flight checks
	I0203 15:29:06.262337   23025 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0203 15:29:06.262418   23025 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0203 15:29:06.262531   23025 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0203 15:29:06.392035   23025 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0203 15:29:06.415859   23025 out.go:204]   - Generating certificates and keys ...
	I0203 15:29:06.415933   23025 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0203 15:29:06.415993   23025 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0203 15:29:06.416127   23025 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0203 15:29:06.416210   23025 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0203 15:29:06.416393   23025 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0203 15:29:06.416444   23025 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0203 15:29:06.416505   23025 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0203 15:29:06.416599   23025 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0203 15:29:06.416718   23025 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0203 15:29:06.416853   23025 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0203 15:29:06.416883   23025 kubeadm.go:322] [certs] Using the existing "sa" key
	I0203 15:29:06.416924   23025 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0203 15:29:06.585518   23025 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0203 15:29:06.672303   23025 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0203 15:29:06.752297   23025 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0203 15:29:06.866676   23025 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0203 15:29:06.877819   23025 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0203 15:29:06.878607   23025 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0203 15:29:06.878699   23025 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0203 15:29:06.954804   23025 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0203 15:29:06.976343   23025 out.go:204]   - Booting up control plane ...
	I0203 15:29:06.976435   23025 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0203 15:29:06.976529   23025 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0203 15:29:06.976595   23025 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0203 15:29:06.976661   23025 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0203 15:29:06.976818   23025 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0203 15:29:11.962683   23025 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002696 seconds
	I0203 15:29:11.962873   23025 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0203 15:29:11.973163   23025 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0203 15:29:12.489805   23025 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0203 15:29:12.489990   23025 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-893000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0203 15:29:12.999479   23025 kubeadm.go:322] [bootstrap-token] Using token: rvszd1.2tpm24p2l05nhnvr
	I0203 15:29:13.037546   23025 out.go:204]   - Configuring RBAC rules ...
	I0203 15:29:13.037778   23025 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0203 15:29:13.079722   23025 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0203 15:29:13.084713   23025 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0203 15:29:13.086925   23025 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0203 15:29:13.089196   23025 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0203 15:29:13.091391   23025 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0203 15:29:13.099469   23025 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0203 15:29:13.244931   23025 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0203 15:29:13.510288   23025 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0203 15:29:13.510341   23025 kubeadm.go:322] 
	I0203 15:29:13.510406   23025 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0203 15:29:13.510421   23025 kubeadm.go:322] 
	I0203 15:29:13.510529   23025 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0203 15:29:13.510541   23025 kubeadm.go:322] 
	I0203 15:29:13.510596   23025 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0203 15:29:13.510727   23025 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0203 15:29:13.510827   23025 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0203 15:29:13.510839   23025 kubeadm.go:322] 
	I0203 15:29:13.510909   23025 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0203 15:29:13.510956   23025 kubeadm.go:322] 
	I0203 15:29:13.510997   23025 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0203 15:29:13.511007   23025 kubeadm.go:322] 
	I0203 15:29:13.511091   23025 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0203 15:29:13.511154   23025 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0203 15:29:13.511243   23025 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0203 15:29:13.511253   23025 kubeadm.go:322] 
	I0203 15:29:13.511361   23025 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0203 15:29:13.511476   23025 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0203 15:29:13.511488   23025 kubeadm.go:322] 
	I0203 15:29:13.511561   23025 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token rvszd1.2tpm24p2l05nhnvr \
	I0203 15:29:13.511688   23025 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bcf6f7e77d0268120f53458c1a4280561e537ee5660c54dcef8b8de8d6430578 \
	I0203 15:29:13.511726   23025 kubeadm.go:322] 	--control-plane 
	I0203 15:29:13.511749   23025 kubeadm.go:322] 
	I0203 15:29:13.511855   23025 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0203 15:29:13.511867   23025 kubeadm.go:322] 
	I0203 15:29:13.511968   23025 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token rvszd1.2tpm24p2l05nhnvr \
	I0203 15:29:13.512119   23025 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bcf6f7e77d0268120f53458c1a4280561e537ee5660c54dcef8b8de8d6430578 
	I0203 15:29:13.516133   23025 kubeadm.go:322] W0203 23:29:06.143516    9140 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0203 15:29:13.516284   23025 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0203 15:29:13.516416   23025 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0203 15:29:13.516429   23025 cni.go:84] Creating CNI manager for ""
	I0203 15:29:13.516440   23025 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 15:29:13.554442   23025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0203 15:29:13.611566   23025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0203 15:29:13.621581   23025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0203 15:29:13.635063   23025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0203 15:29:13.635150   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:13.635156   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=b839c677c13f941c936975b72b386dd12a345761 minikube.k8s.io/name=default-k8s-diff-port-893000 minikube.k8s.io/updated_at=2023_02_03T15_29_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:13.644665   23025 ops.go:34] apiserver oom_adj: -16
	I0203 15:29:13.722924   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:14.288394   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:14.789055   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:15.289018   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:15.786927   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:16.287192   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:16.787421   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:17.287193   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:17.789064   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:18.286988   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:18.789142   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:19.289114   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:19.787154   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:20.288988   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:20.787319   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:21.289033   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:21.787599   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:22.287490   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:22.787614   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:23.287106   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:23.787935   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:24.287992   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:24.788347   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:25.288175   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:25.787702   23025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0203 15:29:25.854901   23025 kubeadm.go:1073] duration metric: took 12.219568586s to wait for elevateKubeSystemPrivileges.
	I0203 15:29:25.854919   23025 kubeadm.go:403] StartCluster complete in 4m52.686364129s
	I0203 15:29:25.854939   23025 settings.go:142] acquiring lock: {Name:mk82a7d24fccbbf9730201facefdc9acc345e8e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:29:25.855045   23025 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 15:29:25.855640   23025 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/kubeconfig: {Name:mkf113f45b09a6304f4248a99f0e16d42a3468fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 15:29:25.855918   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0203 15:29:25.855966   23025 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0203 15:29:25.856025   23025 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-diff-port-893000"
	I0203 15:29:25.856040   23025 addons.go:65] Setting default-storageclass=true in profile "default-k8s-diff-port-893000"
	I0203 15:29:25.856054   23025 addons.go:65] Setting metrics-server=true in profile "default-k8s-diff-port-893000"
	I0203 15:29:25.856061   23025 addons.go:65] Setting dashboard=true in profile "default-k8s-diff-port-893000"
	I0203 15:29:25.856114   23025 addons.go:227] Setting addon dashboard=true in "default-k8s-diff-port-893000"
	I0203 15:29:25.856084   23025 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-893000"
	W0203 15:29:25.856121   23025 addons.go:236] addon dashboard should already be in state true
	I0203 15:29:25.856136   23025 config.go:180] Loaded profile config "default-k8s-diff-port-893000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 15:29:25.856045   23025 addons.go:227] Setting addon storage-provisioner=true in "default-k8s-diff-port-893000"
	W0203 15:29:25.856160   23025 addons.go:236] addon storage-provisioner should already be in state true
	I0203 15:29:25.856167   23025 host.go:66] Checking if "default-k8s-diff-port-893000" exists ...
	I0203 15:29:25.856085   23025 addons.go:227] Setting addon metrics-server=true in "default-k8s-diff-port-893000"
	W0203 15:29:25.856186   23025 addons.go:236] addon metrics-server should already be in state true
	I0203 15:29:25.856193   23025 host.go:66] Checking if "default-k8s-diff-port-893000" exists ...
	I0203 15:29:25.856218   23025 host.go:66] Checking if "default-k8s-diff-port-893000" exists ...
	I0203 15:29:25.856476   23025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-893000 --format={{.State.Status}}
	I0203 15:29:25.857719   23025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-893000 --format={{.State.Status}}
	I0203 15:29:25.857876   23025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-893000 --format={{.State.Status}}
	I0203 15:29:25.857894   23025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-893000 --format={{.State.Status}}
	I0203 15:29:25.951044   23025 addons.go:227] Setting addon default-storageclass=true in "default-k8s-diff-port-893000"
	I0203 15:29:26.040468   23025 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0203 15:29:26.040483   23025 addons.go:236] addon default-storageclass should already be in state true
	I0203 15:29:26.003685   23025 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0203 15:29:25.983675   23025 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0203 15:29:26.077756   23025 host.go:66] Checking if "default-k8s-diff-port-893000" exists ...
	I0203 15:29:26.077910   23025 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 15:29:26.114630   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0203 15:29:26.151505   23025 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0203 15:29:26.115551   23025 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-893000 --format={{.State.Status}}
	I0203 15:29:26.123449   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0203 15:29:26.151532   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0203 15:29:26.151626   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:29:26.188551   23025 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0203 15:29:26.188811   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:29:26.226675   23025 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0203 15:29:26.226696   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0203 15:29:26.226802   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:29:26.268877   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:29:26.269052   23025 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0203 15:29:26.269066   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0203 15:29:26.269184   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:29:26.307649   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:29:26.308267   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:29:26.341969   23025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56288 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/default-k8s-diff-port-893000/id_rsa Username:docker}
	I0203 15:29:26.413425   23025 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-893000" context rescaled to 1 replicas
	I0203 15:29:26.413458   23025 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0203 15:29:26.452575   23025 out.go:177] * Verifying Kubernetes components...
	I0203 15:29:26.473817   23025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 15:29:26.532829   23025 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0203 15:29:26.532865   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0203 15:29:26.609671   23025 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0203 15:29:26.609690   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0203 15:29:26.621190   23025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0203 15:29:26.624515   23025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0203 15:29:26.629611   23025 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0203 15:29:26.629627   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0203 15:29:26.630409   23025 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0203 15:29:26.630425   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0203 15:29:26.719211   23025 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0203 15:29:26.719235   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0203 15:29:26.720386   23025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0203 15:29:26.827005   23025 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0203 15:29:26.827025   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0203 15:29:26.927535   23025 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0203 15:29:26.927558   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0203 15:29:27.018328   23025 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0203 15:29:27.018346   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0203 15:29:27.035944   23025 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0203 15:29:27.035968   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0203 15:29:27.116769   23025 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0203 15:29:27.116794   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0203 15:29:27.133064   23025 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0203 15:29:27.133095   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0203 15:29:27.152644   23025 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0203 15:29:27.152662   23025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0203 15:29:27.219798   23025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0203 15:29:28.013222   23025 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.824552873s)
	I0203 15:29:28.013275   23025 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.539376488s)
	I0203 15:29:28.013279   23025 start.go:919] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0203 15:29:28.013419   23025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-893000
	I0203 15:29:28.080025   23025 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-893000" to be "Ready" ...
	I0203 15:29:28.108823   23025 node_ready.go:49] node "default-k8s-diff-port-893000" has status "Ready":"True"
	I0203 15:29:28.108841   23025 node_ready.go:38] duration metric: took 28.789645ms waiting for node "default-k8s-diff-port-893000" to be "Ready" ...
	I0203 15:29:28.108850   23025 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 15:29:28.121296   23025 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-qhfr6" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:28.235769   23025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.614509497s)
	I0203 15:29:28.235841   23025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.611256708s)
	I0203 15:29:28.235929   23025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.515490151s)
	I0203 15:29:28.235957   23025 addons.go:457] Verifying addon metrics-server=true in "default-k8s-diff-port-893000"
	I0203 15:29:28.534195   23025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.314335355s)
	I0203 15:29:28.560321   23025 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-893000 addons enable metrics-server	
	
	
	I0203 15:29:28.581407   23025 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0203 15:29:28.603278   23025 addons.go:492] enable addons completed in 2.747275233s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0203 15:29:30.137252   23025 pod_ready.go:102] pod "coredns-787d4945fb-qhfr6" in "kube-system" namespace has status "Ready":"False"
	I0203 15:29:32.136260   23025 pod_ready.go:92] pod "coredns-787d4945fb-qhfr6" in "kube-system" namespace has status "Ready":"True"
	I0203 15:29:32.136277   23025 pod_ready.go:81] duration metric: took 4.014885553s waiting for pod "coredns-787d4945fb-qhfr6" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:32.136286   23025 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-ssbml" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:32.647990   23025 pod_ready.go:92] pod "coredns-787d4945fb-ssbml" in "kube-system" namespace has status "Ready":"True"
	I0203 15:29:32.648007   23025 pod_ready.go:81] duration metric: took 511.704705ms waiting for pod "coredns-787d4945fb-ssbml" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:32.648014   23025 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:32.653365   23025 pod_ready.go:92] pod "etcd-default-k8s-diff-port-893000" in "kube-system" namespace has status "Ready":"True"
	I0203 15:29:32.653377   23025 pod_ready.go:81] duration metric: took 5.357576ms waiting for pod "etcd-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:32.653385   23025 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:32.657622   23025 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-893000" in "kube-system" namespace has status "Ready":"True"
	I0203 15:29:32.657631   23025 pod_ready.go:81] duration metric: took 4.240352ms waiting for pod "kube-apiserver-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:32.657637   23025 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:32.709773   23025 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-893000" in "kube-system" namespace has status "Ready":"True"
	I0203 15:29:32.718341   23025 pod_ready.go:81] duration metric: took 60.693693ms waiting for pod "kube-controller-manager-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:32.718352   23025 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-np25m" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:32.932962   23025 pod_ready.go:92] pod "kube-proxy-np25m" in "kube-system" namespace has status "Ready":"True"
	I0203 15:29:32.932974   23025 pod_ready.go:81] duration metric: took 214.613062ms waiting for pod "kube-proxy-np25m" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:32.932981   23025 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:33.333120   23025 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-893000" in "kube-system" namespace has status "Ready":"True"
	I0203 15:29:33.333133   23025 pod_ready.go:81] duration metric: took 400.139338ms waiting for pod "kube-scheduler-default-k8s-diff-port-893000" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:33.333141   23025 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace to be "Ready" ...
	I0203 15:29:35.740062   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:29:37.741560   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:29:40.242286   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:29:42.741469   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:29:45.240935   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:29:47.743086   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:29:50.242390   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:29:52.741531   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:29:55.241202   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:29:57.739803   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:00.240726   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:02.742821   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:05.243310   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:07.740465   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:10.242879   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:12.742143   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:14.742870   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:17.244408   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:19.742495   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:21.742822   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:24.242286   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:26.242759   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:28.741411   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:31.241227   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:33.241451   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:35.743627   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:38.242128   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:40.740664   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:42.742476   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:44.743062   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:47.242798   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:49.741976   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:51.743241   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:54.242513   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:56.243887   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:30:58.740682   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:00.741039   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:02.744371   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:05.244111   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:07.742768   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:10.243133   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:12.744008   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:15.244108   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:17.244756   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:19.742728   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:22.243269   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:24.744083   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:26.744533   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:29.242058   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:31.242512   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:33.741925   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:35.744684   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:38.243108   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:40.743462   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:42.744075   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:45.243664   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:47.245665   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:49.743131   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:51.743250   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:54.242065   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:56.245209   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:31:58.743014   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:00.745437   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:03.242413   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:05.243039   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:07.244868   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:09.743807   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:11.744450   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:14.243374   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:16.245202   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:18.743177   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:20.745422   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:23.245401   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:25.245688   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:27.744792   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:29.745738   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:32.244373   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:34.245428   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:36.743940   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:38.744190   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:40.745502   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:43.245421   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:45.744012   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:48.244175   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:50.745867   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:53.245253   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:55.246136   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:57.744402   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:32:59.745506   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:02.245416   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:04.744458   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:07.245516   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:09.745066   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:12.243257   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:14.243714   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:16.245630   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:18.246759   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:20.745506   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:23.244091   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:25.246269   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:27.744908   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:30.248067   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:32.745212   23025 pod_ready.go:102] pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace has status "Ready":"False"
	I0203 15:33:33.751451   23025 pod_ready.go:81] duration metric: took 4m0.413530285s waiting for pod "metrics-server-7997d45854-k2zx4" in "kube-system" namespace to be "Ready" ...
	E0203 15:33:33.751464   23025 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0203 15:33:33.751467   23025 pod_ready.go:38] duration metric: took 4m5.637722225s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0203 15:33:33.751484   23025 api_server.go:51] waiting for apiserver process to appear ...
	I0203 15:33:33.751567   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:33:33.775938   23025 logs.go:279] 1 containers: [edbb479721ec]
	I0203 15:33:33.776023   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:33:33.800156   23025 logs.go:279] 1 containers: [739385fd8c84]
	I0203 15:33:33.800236   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:33:33.823985   23025 logs.go:279] 1 containers: [faf1c5c0cf33]
	I0203 15:33:33.824060   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:33:33.847701   23025 logs.go:279] 1 containers: [6e9451071668]
	I0203 15:33:33.847782   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:33:33.871583   23025 logs.go:279] 1 containers: [b5ae1b200b55]
	I0203 15:33:33.871666   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:33:33.895560   23025 logs.go:279] 1 containers: [c81836ab6f88]
	I0203 15:33:33.895651   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:33:33.921056   23025 logs.go:279] 1 containers: [c41857685ca5]
	I0203 15:33:33.921147   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:33:33.946999   23025 logs.go:279] 1 containers: [0e2de7d47b13]
	I0203 15:33:33.947017   23025 logs.go:124] Gathering logs for kube-apiserver [edbb479721ec] ...
	I0203 15:33:33.947024   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edbb479721ec"
	I0203 15:33:33.978692   23025 logs.go:124] Gathering logs for kube-scheduler [6e9451071668] ...
	I0203 15:33:33.978706   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e9451071668"
	I0203 15:33:34.010812   23025 logs.go:124] Gathering logs for kube-proxy [b5ae1b200b55] ...
	I0203 15:33:34.010830   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ae1b200b55"
	I0203 15:33:34.037595   23025 logs.go:124] Gathering logs for kubernetes-dashboard [c81836ab6f88] ...
	I0203 15:33:34.037616   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c81836ab6f88"
	I0203 15:33:34.064190   23025 logs.go:124] Gathering logs for storage-provisioner [c41857685ca5] ...
	I0203 15:33:34.064203   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41857685ca5"
	I0203 15:33:34.090283   23025 logs.go:124] Gathering logs for kubelet ...
	I0203 15:33:34.090299   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:33:34.165465   23025 logs.go:124] Gathering logs for dmesg ...
	I0203 15:33:34.165480   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:33:34.178333   23025 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:33:34.178348   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0203 15:33:34.271397   23025 logs.go:124] Gathering logs for container status ...
	I0203 15:33:34.271413   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:33:34.313275   23025 logs.go:124] Gathering logs for Docker ...
	I0203 15:33:34.313290   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:33:34.336187   23025 logs.go:124] Gathering logs for etcd [739385fd8c84] ...
	I0203 15:33:34.336203   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 739385fd8c84"
	I0203 15:33:34.366939   23025 logs.go:124] Gathering logs for coredns [faf1c5c0cf33] ...
	I0203 15:33:34.366957   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1c5c0cf33"
	I0203 15:33:34.393136   23025 logs.go:124] Gathering logs for kube-controller-manager [0e2de7d47b13] ...
	I0203 15:33:34.393152   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2de7d47b13"
	I0203 15:33:36.931658   23025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 15:33:36.942845   23025 api_server.go:71] duration metric: took 4m10.524387666s to wait for apiserver process to appear ...
	I0203 15:33:36.942861   23025 api_server.go:87] waiting for apiserver healthz status ...
	I0203 15:33:36.942942   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:33:36.967669   23025 logs.go:279] 1 containers: [edbb479721ec]
	I0203 15:33:36.967751   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:33:36.991376   23025 logs.go:279] 1 containers: [739385fd8c84]
	I0203 15:33:36.991453   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:33:37.015815   23025 logs.go:279] 1 containers: [faf1c5c0cf33]
	I0203 15:33:37.015895   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:33:37.039346   23025 logs.go:279] 1 containers: [6e9451071668]
	I0203 15:33:37.039430   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:33:37.062802   23025 logs.go:279] 1 containers: [b5ae1b200b55]
	I0203 15:33:37.062891   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:33:37.087082   23025 logs.go:279] 1 containers: [c81836ab6f88]
	I0203 15:33:37.087167   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:33:37.110211   23025 logs.go:279] 1 containers: [c41857685ca5]
	I0203 15:33:37.110302   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:33:37.133680   23025 logs.go:279] 1 containers: [0e2de7d47b13]
	I0203 15:33:37.133700   23025 logs.go:124] Gathering logs for coredns [faf1c5c0cf33] ...
	I0203 15:33:37.133709   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1c5c0cf33"
	I0203 15:33:37.159689   23025 logs.go:124] Gathering logs for kube-proxy [b5ae1b200b55] ...
	I0203 15:33:37.159705   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ae1b200b55"
	I0203 15:33:37.184977   23025 logs.go:124] Gathering logs for kubernetes-dashboard [c81836ab6f88] ...
	I0203 15:33:37.184990   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c81836ab6f88"
	I0203 15:33:37.210814   23025 logs.go:124] Gathering logs for kube-controller-manager [0e2de7d47b13] ...
	I0203 15:33:37.210831   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2de7d47b13"
	I0203 15:33:37.247635   23025 logs.go:124] Gathering logs for container status ...
	I0203 15:33:37.247653   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:33:37.278134   23025 logs.go:124] Gathering logs for etcd [739385fd8c84] ...
	I0203 15:33:37.278148   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 739385fd8c84"
	I0203 15:33:37.307763   23025 logs.go:124] Gathering logs for dmesg ...
	I0203 15:33:37.307776   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:33:37.319922   23025 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:33:37.319936   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0203 15:33:37.402430   23025 logs.go:124] Gathering logs for kube-apiserver [edbb479721ec] ...
	I0203 15:33:37.402446   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edbb479721ec"
	I0203 15:33:37.434886   23025 logs.go:124] Gathering logs for kube-scheduler [6e9451071668] ...
	I0203 15:33:37.434901   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e9451071668"
	I0203 15:33:37.467387   23025 logs.go:124] Gathering logs for storage-provisioner [c41857685ca5] ...
	I0203 15:33:37.467402   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41857685ca5"
	I0203 15:33:37.493762   23025 logs.go:124] Gathering logs for Docker ...
	I0203 15:33:37.493778   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:33:37.515286   23025 logs.go:124] Gathering logs for kubelet ...
	I0203 15:33:37.515300   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:33:40.091821   23025 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56287/healthz ...
	I0203 15:33:40.099413   23025 api_server.go:278] https://127.0.0.1:56287/healthz returned 200:
	ok
	I0203 15:33:40.100891   23025 api_server.go:140] control plane version: v1.26.1
	I0203 15:33:40.100900   23025 api_server.go:130] duration metric: took 3.157972557s to wait for apiserver health ...
	I0203 15:33:40.100905   23025 system_pods.go:43] waiting for kube-system pods to appear ...
	I0203 15:33:40.100972   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0203 15:33:40.125617   23025 logs.go:279] 1 containers: [edbb479721ec]
	I0203 15:33:40.125710   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0203 15:33:40.149257   23025 logs.go:279] 1 containers: [739385fd8c84]
	I0203 15:33:40.149339   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0203 15:33:40.172659   23025 logs.go:279] 1 containers: [faf1c5c0cf33]
	I0203 15:33:40.172745   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0203 15:33:40.196956   23025 logs.go:279] 1 containers: [6e9451071668]
	I0203 15:33:40.197042   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0203 15:33:40.221298   23025 logs.go:279] 1 containers: [b5ae1b200b55]
	I0203 15:33:40.221398   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0203 15:33:40.243876   23025 logs.go:279] 1 containers: [c81836ab6f88]
	I0203 15:33:40.243961   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0203 15:33:40.267549   23025 logs.go:279] 1 containers: [c41857685ca5]
	I0203 15:33:40.267635   23025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0203 15:33:40.290718   23025 logs.go:279] 1 containers: [0e2de7d47b13]
	I0203 15:33:40.290736   23025 logs.go:124] Gathering logs for Docker ...
	I0203 15:33:40.290744   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0203 15:33:40.312213   23025 logs.go:124] Gathering logs for kubelet ...
	I0203 15:33:40.312227   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0203 15:33:40.388756   23025 logs.go:124] Gathering logs for describe nodes ...
	I0203 15:33:40.388772   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0203 15:33:40.470645   23025 logs.go:124] Gathering logs for storage-provisioner [c41857685ca5] ...
	I0203 15:33:40.470660   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c41857685ca5"
	I0203 15:33:40.496249   23025 logs.go:124] Gathering logs for kube-controller-manager [0e2de7d47b13] ...
	I0203 15:33:40.496264   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e2de7d47b13"
	I0203 15:33:40.534473   23025 logs.go:124] Gathering logs for kube-scheduler [6e9451071668] ...
	I0203 15:33:40.534488   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e9451071668"
	I0203 15:33:40.567822   23025 logs.go:124] Gathering logs for kube-proxy [b5ae1b200b55] ...
	I0203 15:33:40.567838   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5ae1b200b55"
	I0203 15:33:40.593727   23025 logs.go:124] Gathering logs for kubernetes-dashboard [c81836ab6f88] ...
	I0203 15:33:40.593742   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c81836ab6f88"
	I0203 15:33:40.620998   23025 logs.go:124] Gathering logs for container status ...
	I0203 15:33:40.621014   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0203 15:33:40.651397   23025 logs.go:124] Gathering logs for dmesg ...
	I0203 15:33:40.651415   23025 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0203 15:33:40.663393   23025 logs.go:124] Gathering logs for kube-apiserver [edbb479721ec] ...
	I0203 15:33:40.663408   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 edbb479721ec"
	I0203 15:33:40.694221   23025 logs.go:124] Gathering logs for etcd [739385fd8c84] ...
	I0203 15:33:40.694238   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 739385fd8c84"
	I0203 15:33:40.725183   23025 logs.go:124] Gathering logs for coredns [faf1c5c0cf33] ...
	I0203 15:33:40.725198   23025 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf1c5c0cf33"
	I0203 15:33:43.257470   23025 system_pods.go:59] 8 kube-system pods found
	I0203 15:33:43.257484   23025 system_pods.go:61] "coredns-787d4945fb-ssbml" [c4fb69fd-c0bb-4907-8a55-30921ce94a98] Running
	I0203 15:33:43.257488   23025 system_pods.go:61] "etcd-default-k8s-diff-port-893000" [0164206d-f409-421b-b968-d913d478c32f] Running
	I0203 15:33:43.257491   23025 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-893000" [232177e7-bc0a-49e1-8800-db098bf95b57] Running
	I0203 15:33:43.257495   23025 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-893000" [a9093dee-30c9-450b-a2a7-ff428721b93b] Running
	I0203 15:33:43.257499   23025 system_pods.go:61] "kube-proxy-np25m" [31e81079-747b-4e09-a64a-4518b1a5c0e1] Running
	I0203 15:33:43.257503   23025 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-893000" [571e075e-d50e-432b-bb6e-6f9e6933aeab] Running
	I0203 15:33:43.257508   23025 system_pods.go:61] "metrics-server-7997d45854-k2zx4" [9b1ca4e5-0c2f-43be-b389-7a09b5f30cc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0203 15:33:43.257513   23025 system_pods.go:61] "storage-provisioner" [6a269ff8-3718-4ffd-8cac-5263d0ffdcd2] Running
	I0203 15:33:43.257518   23025 system_pods.go:74] duration metric: took 3.156546962s to wait for pod list to return data ...
	I0203 15:33:43.257524   23025 default_sa.go:34] waiting for default service account to be created ...
	I0203 15:33:43.259764   23025 default_sa.go:45] found service account: "default"
	I0203 15:33:43.259773   23025 default_sa.go:55] duration metric: took 2.243977ms for default service account to be created ...
	I0203 15:33:43.259777   23025 system_pods.go:116] waiting for k8s-apps to be running ...
	I0203 15:33:43.265008   23025 system_pods.go:86] 8 kube-system pods found
	I0203 15:33:43.265021   23025 system_pods.go:89] "coredns-787d4945fb-ssbml" [c4fb69fd-c0bb-4907-8a55-30921ce94a98] Running
	I0203 15:33:43.265026   23025 system_pods.go:89] "etcd-default-k8s-diff-port-893000" [0164206d-f409-421b-b968-d913d478c32f] Running
	I0203 15:33:43.265030   23025 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-893000" [232177e7-bc0a-49e1-8800-db098bf95b57] Running
	I0203 15:33:43.265034   23025 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-893000" [a9093dee-30c9-450b-a2a7-ff428721b93b] Running
	I0203 15:33:43.265037   23025 system_pods.go:89] "kube-proxy-np25m" [31e81079-747b-4e09-a64a-4518b1a5c0e1] Running
	I0203 15:33:43.265043   23025 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-893000" [571e075e-d50e-432b-bb6e-6f9e6933aeab] Running
	I0203 15:33:43.265049   23025 system_pods.go:89] "metrics-server-7997d45854-k2zx4" [9b1ca4e5-0c2f-43be-b389-7a09b5f30cc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0203 15:33:43.265055   23025 system_pods.go:89] "storage-provisioner" [6a269ff8-3718-4ffd-8cac-5263d0ffdcd2] Running
	I0203 15:33:43.265059   23025 system_pods.go:126] duration metric: took 5.277882ms to wait for k8s-apps to be running ...
	I0203 15:33:43.265064   23025 system_svc.go:44] waiting for kubelet service to be running ....
	I0203 15:33:43.265127   23025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 15:33:43.275233   23025 system_svc.go:56] duration metric: took 10.164105ms WaitForService to wait for kubelet.
	I0203 15:33:43.275244   23025 kubeadm.go:578] duration metric: took 4m16.856666482s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0203 15:33:43.275261   23025 node_conditions.go:102] verifying NodePressure condition ...
	I0203 15:33:43.278261   23025 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0203 15:33:43.278270   23025 node_conditions.go:123] node cpu capacity is 6
	I0203 15:33:43.278277   23025 node_conditions.go:105] duration metric: took 3.011891ms to run NodePressure ...
	I0203 15:33:43.278285   23025 start.go:228] waiting for startup goroutines ...
	I0203 15:33:43.278293   23025 start.go:233] waiting for cluster config update ...
	I0203 15:33:43.278302   23025 start.go:240] writing updated cluster config ...
	I0203 15:33:43.278684   23025 ssh_runner.go:195] Run: rm -f paused
	I0203 15:33:43.316790   23025 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0203 15:33:43.359941   23025 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-893000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-02-03 23:06:49 UTC, end at Fri 2023-02-03 23:33:52 UTC. --
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Started Docker Application Container Engine.
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Stopping Docker Application Container Engine...
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[437]: time="2023-02-03T23:06:52.539155107Z" level=info msg="Processing signal 'terminated'"
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[437]: time="2023-02-03T23:06:52.540022579Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[437]: time="2023-02-03T23:06:52.540232660Z" level=info msg="Daemon shutdown complete"
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: docker.service: Succeeded.
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Stopped Docker Application Container Engine.
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Starting Docker Application Container Engine...
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.587557248Z" level=info msg="Starting up"
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.589324775Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.589361076Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.589385186Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.589394737Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.590574981Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.590616786Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.590634858Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.590645110Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.597659541Z" level=info msg="Loading containers: start."
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.674141602Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.707135159Z" level=info msg="Loading containers: done."
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.715870675Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.715965108Z" level=info msg="Daemon has completed initialization"
	Feb 03 23:06:52 old-k8s-version-136000 systemd[1]: Started Docker Application Container Engine.
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.736641748Z" level=info msg="API listen on [::]:2376"
	Feb 03 23:06:52 old-k8s-version-136000 dockerd[623]: time="2023-02-03T23:06:52.743050535Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-02-03T23:33:54Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:33:54 up  1:33,  0 users,  load average: 1.19, 0.88, 0.90
	Linux old-k8s-version-136000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-02-03 23:06:49 UTC, end at Fri 2023-02-03 23:33:55 UTC. --
	Feb 03 23:33:53 old-k8s-version-136000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 03 23:33:54 old-k8s-version-136000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Feb 03 23:33:54 old-k8s-version-136000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 03 23:33:54 old-k8s-version-136000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34719]: I0203 23:33:54.203555   34719 server.go:410] Version: v1.16.0
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34719]: I0203 23:33:54.203860   34719 plugins.go:100] No cloud provider specified.
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34719]: I0203 23:33:54.203872   34719 server.go:773] Client rotation is on, will bootstrap in background
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34719]: I0203 23:33:54.205648   34719 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34719]: W0203 23:33:54.206550   34719 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34719]: W0203 23:33:54.206626   34719 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34719]: F0203 23:33:54.206654   34719 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 03 23:33:54 old-k8s-version-136000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 03 23:33:54 old-k8s-version-136000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 03 23:33:54 old-k8s-version-136000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1669.
	Feb 03 23:33:54 old-k8s-version-136000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 03 23:33:54 old-k8s-version-136000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34758]: I0203 23:33:54.949158   34758 server.go:410] Version: v1.16.0
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34758]: I0203 23:33:54.949409   34758 plugins.go:100] No cloud provider specified.
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34758]: I0203 23:33:54.949420   34758 server.go:773] Client rotation is on, will bootstrap in background
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34758]: I0203 23:33:54.951334   34758 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34758]: W0203 23:33:54.952117   34758 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34758]: W0203 23:33:54.952193   34758 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Feb 03 23:33:54 old-k8s-version-136000 kubelet[34758]: F0203 23:33:54.952222   34758 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Feb 03 23:33:54 old-k8s-version-136000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 03 23:33:54 old-k8s-version-136000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 15:33:54.800032   23777 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-136000 -n old-k8s-version-136000

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 2 (445.529771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-136000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (555.00s)

                                                
                                    

Test pass (274/306)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 16.16
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.26.1/json-events 6.94
11 TestDownloadOnly/v1.26.1/preload-exists 0
14 TestDownloadOnly/v1.26.1/kubectl 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.72
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
18 TestDownloadOnlyKic 13.66
19 TestBinaryMirror 1.66
20 TestOffline 52.12
22 TestAddons/Setup 151.21
26 TestAddons/parallel/MetricsServer 5.72
27 TestAddons/parallel/HelmTiller 12
29 TestAddons/parallel/CSI 37.81
30 TestAddons/parallel/Headlamp 11.4
31 TestAddons/parallel/CloudSpanner 5.46
34 TestAddons/serial/GCPAuth/Namespaces 0.1
35 TestAddons/StoppedEnableDisable 11.48
36 TestCertOptions 43.33
37 TestCertExpiration 234.7
38 TestDockerFlags 38.79
39 TestForceSystemdFlag 40.66
40 TestForceSystemdEnv 37.01
42 TestHyperKitDriverInstallOrUpdate 9.25
45 TestErrorSpam/setup 34.44
46 TestErrorSpam/start 2.47
47 TestErrorSpam/status 1.25
48 TestErrorSpam/pause 1.84
49 TestErrorSpam/unpause 1.83
50 TestErrorSpam/stop 11.54
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 47.43
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 44.7
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.07
61 TestFunctional/serial/CacheCmd/cache/add_remote 7.14
62 TestFunctional/serial/CacheCmd/cache/add_local 1.68
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
64 TestFunctional/serial/CacheCmd/cache/list 0.08
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.63
67 TestFunctional/serial/CacheCmd/cache/delete 0.16
68 TestFunctional/serial/MinikubeKubectlCmd 0.53
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.68
70 TestFunctional/serial/ExtraConfig 43.29
71 TestFunctional/serial/ComponentHealth 0.06
72 TestFunctional/serial/LogsCmd 3.18
73 TestFunctional/serial/LogsFileCmd 3.28
75 TestFunctional/parallel/ConfigCmd 0.5
76 TestFunctional/parallel/DashboardCmd 13.77
77 TestFunctional/parallel/DryRun 1.61
78 TestFunctional/parallel/InternationalLanguage 0.84
79 TestFunctional/parallel/StatusCmd 1.31
82 TestFunctional/parallel/ServiceCmd 19.96
84 TestFunctional/parallel/AddonsCmd 0.3
85 TestFunctional/parallel/PersistentVolumeClaim 25.05
87 TestFunctional/parallel/SSHCmd 0.81
88 TestFunctional/parallel/CpCmd 2.33
89 TestFunctional/parallel/MySQL 26.57
90 TestFunctional/parallel/FileSync 0.43
91 TestFunctional/parallel/CertSync 2.81
95 TestFunctional/parallel/NodeLabels 0.07
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
99 TestFunctional/parallel/License 0.46
100 TestFunctional/parallel/Version/short 0.13
101 TestFunctional/parallel/Version/components 0.72
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.41
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
106 TestFunctional/parallel/ImageCommands/ImageBuild 3.41
107 TestFunctional/parallel/ImageCommands/Setup 2.36
108 TestFunctional/parallel/DockerEnv/bash 1.87
109 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.73
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.34
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.45
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.36
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.58
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.76
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.13
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.87
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.59
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.14
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
130 TestFunctional/parallel/ProfileCmd/profile_list 0.5
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
132 TestFunctional/parallel/MountCmd/any-port 8.72
133 TestFunctional/parallel/MountCmd/specific-port 2.45
134 TestFunctional/delete_addon-resizer_images 0.15
135 TestFunctional/delete_my-image_image 0.06
136 TestFunctional/delete_minikube_cached_images 0.06
140 TestImageBuild/serial/NormalBuild 2.2
141 TestImageBuild/serial/BuildWithBuildArg 0.93
142 TestImageBuild/serial/BuildWithDockerIgnore 0.48
143 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.41
153 TestJSONOutput/start/Command 44.62
154 TestJSONOutput/start/Audit 0
156 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/pause/Command 0.65
160 TestJSONOutput/pause/Audit 0
162 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/unpause/Command 0.64
166 TestJSONOutput/unpause/Audit 0
168 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/stop/Command 10.84
172 TestJSONOutput/stop/Audit 0
174 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
176 TestErrorJSONOutput 0.78
178 TestKicCustomNetwork/create_custom_network 31.29
179 TestKicCustomNetwork/use_default_bridge_network 31.84
180 TestKicExistingNetwork 31.84
181 TestKicCustomSubnet 32.59
182 TestKicStaticIP 35.06
183 TestMainNoArgs 0.08
184 TestMinikubeProfile 69.87
187 TestMountStart/serial/StartWithMountFirst 8.03
188 TestMountStart/serial/VerifyMountFirst 0.4
189 TestMountStart/serial/StartWithMountSecond 8.04
190 TestMountStart/serial/VerifyMountSecond 0.4
191 TestMountStart/serial/DeleteFirst 2.14
192 TestMountStart/serial/VerifyMountPostDelete 0.4
193 TestMountStart/serial/Stop 1.59
194 TestMountStart/serial/RestartStopped 5.9
195 TestMountStart/serial/VerifyMountPostStop 0.4
198 TestMultiNode/serial/FreshStart2Nodes 79.87
199 TestMultiNode/serial/DeployApp2Nodes 7.67
200 TestMultiNode/serial/PingHostFrom2Pods 0.92
201 TestMultiNode/serial/AddNode 22.7
202 TestMultiNode/serial/ProfileList 0.45
203 TestMultiNode/serial/CopyFile 14.76
204 TestMultiNode/serial/StopNode 3.07
205 TestMultiNode/serial/StartAfterStop 10.19
206 TestMultiNode/serial/RestartKeepsNodes 113.99
207 TestMultiNode/serial/DeleteNode 6.11
208 TestMultiNode/serial/StopMultiNode 22.02
209 TestMultiNode/serial/RestartMultiNode 50.32
210 TestMultiNode/serial/ValidateNameConflict 33.74
214 TestPreload 122.57
216 TestScheduledStopUnix 103.8
217 TestSkaffold 61.91
219 TestInsufficientStorage 14.44
235 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.24
236 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.26
237 TestStoppedBinaryUpgrade/Setup 0.92
239 TestStoppedBinaryUpgrade/MinikubeLogs 3.53
241 TestPause/serial/Start 46.03
242 TestPause/serial/SecondStartNoReconfiguration 44
243 TestPause/serial/Pause 0.71
244 TestPause/serial/VerifyStatus 0.42
245 TestPause/serial/Unpause 0.62
246 TestPause/serial/PauseAgain 0.82
247 TestPause/serial/DeletePaused 2.63
248 TestPause/serial/VerifyDeletedResources 0.56
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.38
258 TestNoKubernetes/serial/StartWithK8s 30.92
259 TestNoKubernetes/serial/StartWithStopK8s 8.72
260 TestNoKubernetes/serial/Start 6.97
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.44
262 TestNoKubernetes/serial/ProfileList 1.87
263 TestNoKubernetes/serial/Stop 1.59
264 TestNoKubernetes/serial/StartNoArgs 5.79
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.49
266 TestNetworkPlugins/group/auto/Start 52.82
267 TestNetworkPlugins/group/auto/KubeletFlags 0.41
268 TestNetworkPlugins/group/auto/NetCatPod 15.21
269 TestNetworkPlugins/group/auto/DNS 0.13
270 TestNetworkPlugins/group/auto/Localhost 0.12
271 TestNetworkPlugins/group/auto/HairPin 0.12
272 TestNetworkPlugins/group/kindnet/Start 51.86
273 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
274 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
275 TestNetworkPlugins/group/kindnet/NetCatPod 15.19
276 TestNetworkPlugins/group/kindnet/DNS 0.13
277 TestNetworkPlugins/group/kindnet/Localhost 0.12
278 TestNetworkPlugins/group/kindnet/HairPin 0.12
279 TestNetworkPlugins/group/flannel/Start 58.59
280 TestNetworkPlugins/group/flannel/ControllerPod 5.02
281 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
282 TestNetworkPlugins/group/flannel/NetCatPod 14.23
283 TestNetworkPlugins/group/flannel/DNS 0.16
284 TestNetworkPlugins/group/flannel/Localhost 0.13
285 TestNetworkPlugins/group/flannel/HairPin 0.13
286 TestNetworkPlugins/group/enable-default-cni/Start 51.83
287 TestNetworkPlugins/group/bridge/Start 47.15
288 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
289 TestNetworkPlugins/group/enable-default-cni/NetCatPod 16.19
290 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
291 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
292 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
293 TestNetworkPlugins/group/bridge/KubeletFlags 0.48
294 TestNetworkPlugins/group/bridge/NetCatPod 15.23
295 TestNetworkPlugins/group/kubenet/Start 45.56
296 TestNetworkPlugins/group/bridge/DNS 0.13
297 TestNetworkPlugins/group/bridge/Localhost 0.12
298 TestNetworkPlugins/group/bridge/HairPin 0.12
299 TestNetworkPlugins/group/custom-flannel/Start 62.77
300 TestNetworkPlugins/group/kubenet/KubeletFlags 0.41
301 TestNetworkPlugins/group/kubenet/NetCatPod 15.2
302 TestNetworkPlugins/group/kubenet/DNS 0.14
303 TestNetworkPlugins/group/kubenet/Localhost 0.13
304 TestNetworkPlugins/group/kubenet/HairPin 0.12
305 TestNetworkPlugins/group/calico/Start 75.86
306 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.5
307 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.27
308 TestNetworkPlugins/group/custom-flannel/DNS 0.14
309 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
310 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
311 TestNetworkPlugins/group/false/Start 55.29
312 TestNetworkPlugins/group/calico/ControllerPod 5.02
313 TestNetworkPlugins/group/calico/KubeletFlags 0.41
314 TestNetworkPlugins/group/calico/NetCatPod 19.2
315 TestNetworkPlugins/group/false/KubeletFlags 0.42
316 TestNetworkPlugins/group/false/NetCatPod 29.24
317 TestNetworkPlugins/group/calico/DNS 0.13
318 TestNetworkPlugins/group/calico/Localhost 0.13
319 TestNetworkPlugins/group/calico/HairPin 0.11
322 TestNetworkPlugins/group/false/DNS 0.16
323 TestNetworkPlugins/group/false/Localhost 0.14
324 TestNetworkPlugins/group/false/HairPin 0.15
326 TestStartStop/group/no-preload/serial/FirstStart 58.72
327 TestStartStop/group/no-preload/serial/DeployApp 10.27
328 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.77
329 TestStartStop/group/no-preload/serial/Stop 10.91
330 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.39
331 TestStartStop/group/no-preload/serial/SecondStart 555.44
334 TestStartStop/group/old-k8s-version/serial/Stop 1.57
335 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.4
337 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
339 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.44
340 TestStartStop/group/no-preload/serial/Pause 3.2
342 TestStartStop/group/embed-certs/serial/FirstStart 53.69
343 TestStartStop/group/embed-certs/serial/DeployApp 10.28
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.9
345 TestStartStop/group/embed-certs/serial/Stop 10.88
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.39
347 TestStartStop/group/embed-certs/serial/SecondStart 553.07
349 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
350 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
351 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.44
352 TestStartStop/group/embed-certs/serial/Pause 3.26
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.71
355 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
357 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.97
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.4
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 556.2
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
362 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
363 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.44
364 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.74
366 TestStartStop/group/newest-cni/serial/FirstStart 44.78
367 TestStartStop/group/newest-cni/serial/DeployApp 0
368 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
369 TestStartStop/group/newest-cni/serial/Stop 10.91
370 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
371 TestStartStop/group/newest-cni/serial/SecondStart 24.83
372 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.43
375 TestStartStop/group/newest-cni/serial/Pause 3.22
x
+
TestDownloadOnly/v1.16.0/json-events (16.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-381000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-381000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (16.161219621s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-381000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-381000: exit status 85 (295.937525ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-381000 | jenkins | v1.29.0 | 03 Feb 23 14:07 PST |          |
	|         | -p download-only-381000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/03 14:07:41
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 14:07:41.320505    2570 out.go:296] Setting OutFile to fd 1 ...
	I0203 14:07:41.320681    2570 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:07:41.320686    2570 out.go:309] Setting ErrFile to fd 2...
	I0203 14:07:41.320690    2570 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:07:41.320791    2570 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	W0203 14:07:41.320903    2570 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15770-1719/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15770-1719/.minikube/config/config.json: no such file or directory
	I0203 14:07:41.321615    2570 out.go:303] Setting JSON to true
	I0203 14:07:41.340315    2570 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":436,"bootTime":1675461625,"procs":395,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0203 14:07:41.340407    2570 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0203 14:07:41.363364    2570 out.go:97] [download-only-381000] minikube v1.29.0 on Darwin 13.2
	I0203 14:07:41.363610    2570 notify.go:220] Checking for updates...
	I0203 14:07:41.383995    2570 out.go:169] MINIKUBE_LOCATION=15770
	W0203 14:07:41.363626    2570 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball: no such file or directory
	I0203 14:07:41.405906    2570 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 14:07:41.448945    2570 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0203 14:07:41.470020    2570 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 14:07:41.491168    2570 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	W0203 14:07:41.534934    2570 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0203 14:07:41.535397    2570 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 14:07:41.596050    2570 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0203 14:07:41.596156    2570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 14:07:41.741481    2570 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:50 SystemTime:2023-02-03 22:07:41.644082578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 14:07:41.763569    2570 out.go:97] Using the docker driver based on user configuration
	I0203 14:07:41.763684    2570 start.go:296] selected driver: docker
	I0203 14:07:41.763703    2570 start.go:857] validating driver "docker" against <nil>
	I0203 14:07:41.763924    2570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 14:07:41.909943    2570 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:50 SystemTime:2023-02-03 22:07:41.814300355 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 14:07:41.910069    2570 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0203 14:07:41.914201    2570 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0203 14:07:41.914368    2570 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0203 14:07:41.935355    2570 out.go:169] Using Docker Desktop driver with root privileges
	I0203 14:07:41.956370    2570 cni.go:84] Creating CNI manager for ""
	I0203 14:07:41.956407    2570 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0203 14:07:41.956426    2570 start_flags.go:319] config:
	{Name:download-only-381000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-381000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 14:07:41.978477    2570 out.go:97] Starting control plane node download-only-381000 in cluster download-only-381000
	I0203 14:07:41.978584    2570 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 14:07:42.000424    2570 out.go:97] Pulling base image ...
	I0203 14:07:42.000512    2570 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 14:07:42.000616    2570 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 14:07:42.056262    2570 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 to local cache
	I0203 14:07:42.056538    2570 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local cache directory
	I0203 14:07:42.056670    2570 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 to local cache
	I0203 14:07:42.061461    2570 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0203 14:07:42.061474    2570 cache.go:57] Caching tarball of preloaded images
	I0203 14:07:42.061621    2570 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 14:07:42.083268    2570 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0203 14:07:42.083347    2570 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0203 14:07:42.167818    2570 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0203 14:07:46.596818    2570 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0203 14:07:46.597010    2570 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0203 14:07:47.198968    2570 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0203 14:07:47.199203    2570 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/download-only-381000/config.json ...
	I0203 14:07:47.199228    2570 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/download-only-381000/config.json: {Name:mkc631ce3627f3e05feb83f2ecee21453359874a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0203 14:07:47.199508    2570 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0203 14:07:47.199781    2570 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-381000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)

                                                
                                    
TestDownloadOnly/v1.26.1/json-events (6.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-381000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-381000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker : (6.93579856s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (6.94s)

                                                
                                    
TestDownloadOnly/v1.26.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/kubectl
--- PASS: TestDownloadOnly/v1.26.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-381000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-381000: exit status 85 (291.253724ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-381000 | jenkins | v1.29.0 | 03 Feb 23 14:07 PST |          |
	|         | -p download-only-381000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-381000 | jenkins | v1.29.0 | 03 Feb 23 14:07 PST |          |
	|         | -p download-only-381000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/03 14:07:57
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0203 14:07:57.782475    2612 out.go:296] Setting OutFile to fd 1 ...
	I0203 14:07:57.782732    2612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:07:57.782738    2612 out.go:309] Setting ErrFile to fd 2...
	I0203 14:07:57.782742    2612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:07:57.782851    2612 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	W0203 14:07:57.782954    2612 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15770-1719/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15770-1719/.minikube/config/config.json: no such file or directory
	I0203 14:07:57.783309    2612 out.go:303] Setting JSON to true
	I0203 14:07:57.801791    2612 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":452,"bootTime":1675461625,"procs":395,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0203 14:07:57.801897    2612 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0203 14:07:57.823964    2612 out.go:97] [download-only-381000] minikube v1.29.0 on Darwin 13.2
	I0203 14:07:57.824174    2612 notify.go:220] Checking for updates...
	I0203 14:07:57.845481    2612 out.go:169] MINIKUBE_LOCATION=15770
	I0203 14:07:57.866517    2612 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 14:07:57.888695    2612 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0203 14:07:57.909846    2612 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 14:07:57.931731    2612 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	W0203 14:07:57.975781    2612 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0203 14:07:57.976469    2612 config.go:180] Loaded profile config "download-only-381000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0203 14:07:57.976561    2612 start.go:765] api.Load failed for download-only-381000: filestore "download-only-381000": Docker machine "download-only-381000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0203 14:07:57.976661    2612 driver.go:365] Setting default libvirt URI to qemu:///system
	W0203 14:07:57.976697    2612 start.go:765] api.Load failed for download-only-381000: filestore "download-only-381000": Docker machine "download-only-381000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0203 14:07:58.037272    2612 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0203 14:07:58.037388    2612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 14:07:58.180516    2612 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:50 SystemTime:2023-02-03 22:07:58.087509802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 14:07:58.202346    2612 out.go:97] Using the docker driver based on existing profile
	I0203 14:07:58.202419    2612 start.go:296] selected driver: docker
	I0203 14:07:58.202435    2612 start.go:857] validating driver "docker" against &{Name:download-only-381000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-381000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP:}
	I0203 14:07:58.202794    2612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 14:07:58.342717    2612 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:50 SystemTime:2023-02-03 22:07:58.252484081 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 14:07:58.345140    2612 cni.go:84] Creating CNI manager for ""
	I0203 14:07:58.345169    2612 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0203 14:07:58.345184    2612 start_flags.go:319] config:
	{Name:download-only-381000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-381000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 14:07:58.366547    2612 out.go:97] Starting control plane node download-only-381000 in cluster download-only-381000
	I0203 14:07:58.366678    2612 cache.go:120] Beginning downloading kic base image for docker with docker
	I0203 14:07:58.388374    2612 out.go:97] Pulling base image ...
	I0203 14:07:58.388468    2612 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 14:07:58.388565    2612 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local docker daemon
	I0203 14:07:58.441648    2612 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0203 14:07:58.441668    2612 cache.go:57] Caching tarball of preloaded images
	I0203 14:07:58.441898    2612 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0203 14:07:58.463337    2612 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0203 14:07:58.463408    2612 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0203 14:07:58.468302    2612 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 to local cache
	I0203 14:07:58.468396    2612 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local cache directory
	I0203 14:07:58.468415    2612 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 in local cache directory, skipping pull
	I0203 14:07:58.468421    2612 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 exists in cache, skipping pull
	I0203 14:07:58.468429    2612 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 as a tarball
	I0203 14:07:58.546274    2612 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:c6cc8ea1da4e19500d6fe35540785ea8 -> /Users/jenkins/minikube-integration/15770-1719/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-381000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.29s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.72s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.72s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-381000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                    
TestDownloadOnlyKic (13.66s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-317000 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-317000 --force --alsologtostderr --driver=docker : (12.558245882s)
helpers_test.go:175: Cleaning up "download-docker-317000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-317000
--- PASS: TestDownloadOnlyKic (13.66s)

                                                
                                    
TestBinaryMirror (1.66s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-989000 --alsologtostderr --binary-mirror http://127.0.0.1:49488 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-989000 --alsologtostderr --binary-mirror http://127.0.0.1:49488 --driver=docker : (1.033587464s)
helpers_test.go:175: Cleaning up "binary-mirror-989000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-989000
--- PASS: TestBinaryMirror (1.66s)

                                                
                                    
TestOffline (52.12s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-804000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-804000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (49.419394258s)
helpers_test.go:175: Cleaning up "offline-docker-804000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-804000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-804000: (2.704395867s)
--- PASS: TestOffline (52.12s)

                                                
                                    
TestAddons/Setup (151.21s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-379000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-379000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m31.206184833s)
--- PASS: TestAddons/Setup (151.21s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.72s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.456668ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-6b8kp" [d8c82a2b-24f4-495b-ab51-f7d5985af894] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009463005s
addons_test.go:380: (dbg) Run:  kubectl --context addons-379000 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-darwin-amd64 -p addons-379000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)

                                                
                                    
TestAddons/parallel/HelmTiller (12s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 2.925764ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-q8dqj" [5e3235fa-4c11-4708-b7dd-5dcec56f895d] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.011636083s
addons_test.go:438: (dbg) Run:  kubectl --context addons-379000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:438: (dbg) Done: kubectl --context addons-379000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.500177545s)
addons_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 -p addons-379000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.00s)

                                                
                                    
TestAddons/parallel/CSI (37.81s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 5.487711ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-379000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-379000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-379000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [21f70f1d-f762-4762-a1c5-dba5d488bfc9] Pending
helpers_test.go:344: "task-pv-pod" [21f70f1d-f762-4762-a1c5-dba5d488bfc9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod" [21f70f1d-f762-4762-a1c5-dba5d488bfc9] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00826471s
addons_test.go:549: (dbg) Run:  kubectl --context addons-379000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-379000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-379000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-379000 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-379000 delete pod task-pv-pod: (1.103062914s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-379000 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-379000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-379000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-379000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d01b87de-b6c9-4458-a6e3-77f4b37d6a6c] Pending
helpers_test.go:344: "task-pv-pod-restore" [d01b87de-b6c9-4458-a6e3-77f4b37d6a6c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d01b87de-b6c9-4458-a6e3-77f4b37d6a6c] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 14.010650777s
addons_test.go:591: (dbg) Run:  kubectl --context addons-379000 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-379000 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-379000 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-darwin-amd64 -p addons-379000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-darwin-amd64 -p addons-379000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.041616197s)
addons_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 -p addons-379000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (37.81s)

                                                
                                    
TestAddons/parallel/Headlamp (11.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-379000 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-379000 --alsologtostderr -v=1: (2.391816286s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-z77m7" [38b43978-7fca-45a3-a5e1-a49ac4b93f47] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:344: "headlamp-5759877c79-z77m7" [38b43978-7fca-45a3-a5e1-a49ac4b93f47] Running

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.008088267s
--- PASS: TestAddons/parallel/Headlamp (11.40s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.46s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-spbct" [06521ae5-7bee-45b0-a267-1d85e2c4a238] Running

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008239072s
addons_test.go:813: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-379000
--- PASS: TestAddons/parallel/CloudSpanner (5.46s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-379000 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-379000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.48s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-379000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-379000: (11.03254653s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-379000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-379000
--- PASS: TestAddons/StoppedEnableDisable (11.48s)

                                                
                                    
TestCertOptions (43.33s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-792000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-792000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (39.772725644s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-792000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-792000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-792000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-792000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-792000: (2.675869329s)
--- PASS: TestCertOptions (43.33s)

                                                
                                    
TestCertExpiration (234.7s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-895000 --memory=2048 --cert-expiration=3m --driver=docker 

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-895000 --memory=2048 --cert-expiration=3m --driver=docker : (33.452885752s)

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-895000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0203 14:47:16.620513    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-895000 --memory=2048 --cert-expiration=8760h --driver=docker : (18.586579707s)
helpers_test.go:175: Cleaning up "cert-expiration-895000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-895000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-895000: (2.656447539s)
--- PASS: TestCertExpiration (234.70s)

                                                
                                    
TestDockerFlags (38.79s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-597000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-597000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (35.144937083s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-597000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-597000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-597000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-597000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-597000: (2.752388603s)
--- PASS: TestDockerFlags (38.79s)

                                                
                                    
TestForceSystemdFlag (40.66s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-989000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-989000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (37.54699513s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-989000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-989000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-989000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-989000: (2.617322344s)
--- PASS: TestForceSystemdFlag (40.66s)

                                                
                                    
TestForceSystemdEnv (37.01s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-317000 --memory=2048 --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-317000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (33.939315332s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-317000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-317000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-317000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-317000: (2.569409882s)
--- PASS: TestForceSystemdEnv (37.01s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (9.25s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.25s)

                                                
                                    
TestErrorSpam/setup (34.44s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-542000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-542000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 --driver=docker : (34.442079294s)
--- PASS: TestErrorSpam/setup (34.44s)

                                                
                                    
TestErrorSpam/start (2.47s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 start --dry-run
--- PASS: TestErrorSpam/start (2.47s)

                                                
                                    
TestErrorSpam/status (1.25s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 status
--- PASS: TestErrorSpam/status (1.25s)

                                                
                                    
TestErrorSpam/pause (1.84s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 pause
--- PASS: TestErrorSpam/pause (1.84s)

                                                
                                    
TestErrorSpam/unpause (1.83s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

                                                
                                    
TestErrorSpam/stop (11.54s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 stop: (10.910925726s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-542000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-542000 stop
--- PASS: TestErrorSpam/stop (11.54s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15770-1719/.minikube/files/etc/test/nested/copy/2568/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (47.43s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-270000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-270000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (47.428519648s)
--- PASS: TestFunctional/serial/StartWithProxy (47.43s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (44.7s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-270000 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-270000 --alsologtostderr -v=8: (44.699330598s)
functional_test.go:656: soft start took 44.699961478s for "functional-270000" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.70s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-270000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (7.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 cache add k8s.gcr.io/pause:3.1: (2.414442112s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 cache add k8s.gcr.io/pause:3.3: (2.509849807s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 cache add k8s.gcr.io/pause:latest: (2.214418729s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-270000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4213539909/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 cache add minikube-local-cache-test:functional-270000
functional_test.go:1082: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 cache add minikube-local-cache-test:functional-270000: (1.122620821s)
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 cache delete minikube-local-cache-test:functional-270000
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-270000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-270000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (396.710052ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 cache reload: (1.394973579s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.63s)
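The sequence above (remove the image inside the node, confirm crictl no longer sees it, run cache reload, confirm it is back) can be reproduced outside the harness. A rough Go sketch, assuming a profile is already running; the profile name is a placeholder and the image and commands are taken from the log.

// Sketch: exercise `minikube cache reload` the way this test does.
package main

import (
    "log"
    "os/exec"
)

// run shells out to minikube and echoes the command plus its combined output.
func run(args ...string) error {
    out, err := exec.Command("minikube", args...).CombinedOutput()
    log.Printf("$ minikube %v\n%s", args, out)
    return err
}

func main() {
    p := "functional-demo" // placeholder profile name

    if err := run("-p", p, "cache", "add", "k8s.gcr.io/pause:latest"); err != nil { // populate the cache
        log.Fatal(err)
    }
    _ = run("-p", p, "ssh", "sudo docker rmi k8s.gcr.io/pause:latest") // drop the image inside the node
    if run("-p", p, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest") == nil {
        log.Fatal("image unexpectedly still present") // inspecti should fail while the image is gone
    }
    if err := run("-p", p, "cache", "reload"); err != nil { // push cached images back into the node
        log.Fatal(err)
    }
    if err := run("-p", p, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"); err != nil {
        log.Fatal("image still missing after cache reload: ", err)
    }
}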

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.53s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 kubectl -- --context functional-270000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.68s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-270000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.68s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.29s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-270000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0203 14:15:52.991725    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:15:52.997634    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:15:53.007803    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:15:53.027876    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:15:53.068037    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:15:53.148352    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:15:53.309018    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:15:53.629399    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:15:54.271653    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:15:55.552316    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-270000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.285434765s)
functional_test.go:754: restart took 43.28559847s for "functional-270000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.29s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-270000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
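The health check is only a kubectl query (get po -l tier=control-plane -n kube-system -o=json) plus a walk over each pod's phase and Ready condition. A compact sketch of that walk; the field names follow the core/v1 Pod JSON, and the kubectl context name is a placeholder.

// Sketch: list control-plane pods and report phase plus Ready condition,
// roughly what the ComponentHealth check inspects.
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "os/exec"
)

type podList struct {
    Items []struct {
        Metadata struct {
            Name string
        } `json:"metadata"`
        Status struct {
            Phase      string `json:"phase"`
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    } `json:"items"`
}

func main() {
    out, err := exec.Command("kubectl", "--context", "functional-demo", // placeholder context
        "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
    if err != nil {
        log.Fatal(err)
    }
    var pods podList
    if err := json.Unmarshal(out, &pods); err != nil {
        log.Fatal(err)
    }
    for _, p := range pods.Items {
        ready := "Unknown"
        for _, c := range p.Status.Conditions {
            if c.Type == "Ready" {
                ready = c.Status
            }
        }
        fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
    }
}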

                                                
                                    
TestFunctional/serial/LogsCmd (3.18s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 logs
E0203 14:15:58.114706    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 logs: (3.176276748s)
--- PASS: TestFunctional/serial/LogsCmd (3.18s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd3921560043/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd3921560043/001/logs.txt: (3.277895547s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.28s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-270000 config get cpus: exit status 14 (58.696371ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 config get cpus
E0203 14:16:03.236937    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-270000 config get cpus: exit status 14 (59.812976ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
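The two non-zero exits above are the expected behaviour: minikube config get on a key that is not set exits with status 14. A small Go sketch that distinguishes that case from a real failure; the profile name is a placeholder.

// Sketch: `minikube config get` returns exit code 14 when the key is not set.
package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("minikube", "-p", "functional-demo", "config", "get", "cpus")
    out, err := cmd.CombinedOutput()
    if err == nil {
        fmt.Printf("cpus is set to %s", out)
        return
    }
    if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 14 {
        fmt.Println("cpus is not set in the minikube config") // expected for an unset key
        return
    }
    log.Fatalf("unexpected failure: %v\n%s", err, out)
}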

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.77s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-270000 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-270000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 5258: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.77s)

                                                
                                    
TestFunctional/parallel/DryRun (1.61s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-270000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-270000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (651.184904ms)

                                                
                                                
-- stdout --
	* [functional-270000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 14:17:11.702068    5169 out.go:296] Setting OutFile to fd 1 ...
	I0203 14:17:11.702224    5169 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:17:11.702230    5169 out.go:309] Setting ErrFile to fd 2...
	I0203 14:17:11.702234    5169 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:17:11.702358    5169 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 14:17:11.702830    5169 out.go:303] Setting JSON to false
	I0203 14:17:11.721104    5169 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1006,"bootTime":1675461625,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0203 14:17:11.721191    5169 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0203 14:17:11.743428    5169 out.go:177] * [functional-270000] minikube v1.29.0 on Darwin 13.2
	I0203 14:17:11.786014    5169 notify.go:220] Checking for updates...
	I0203 14:17:11.807962    5169 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 14:17:11.828948    5169 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 14:17:11.849919    5169 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0203 14:17:11.873217    5169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 14:17:11.894245    5169 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	I0203 14:17:11.915996    5169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 14:17:11.937713    5169 config.go:180] Loaded profile config "functional-270000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 14:17:11.938412    5169 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 14:17:11.999532    5169 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0203 14:17:11.999676    5169 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 14:17:12.142090    5169 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 22:17:12.048581728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 14:17:12.163631    5169 out.go:177] * Using the docker driver based on existing profile
	I0203 14:17:12.185412    5169 start.go:296] selected driver: docker
	I0203 14:17:12.185448    5169 start.go:857] validating driver "docker" against &{Name:functional-270000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-270000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 14:17:12.185602    5169 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 14:17:12.210486    5169 out.go:177] 
	W0203 14:17:12.231761    5169 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0203 14:17:12.253318    5169 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-270000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.61s)
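The first dry run asks for 250MB and is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second dry run without a memory override validates cleanly. A sketch of the failing case, assuming an existing profile; the profile name is a placeholder.

// Sketch: a --dry-run start with too little memory should fail fast with exit code 23.
package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    cmd := exec.Command("minikube", "start", "-p", "functional-demo", // placeholder profile
        "--dry-run", "--memory", "250MB", "--driver=docker")
    out, err := cmd.CombinedOutput()
    if err == nil {
        log.Fatal("expected the dry run to be rejected")
    }
    if ee, ok := err.(*exec.ExitError); ok {
        // 23 is the RSRC_INSUFFICIENT_REQ_MEMORY exit code seen in the log above.
        fmt.Printf("dry run rejected with exit code %d\n%s", ee.ExitCode(), out)
    } else {
        log.Fatalf("could not run minikube: %v", err)
    }
}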

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.84s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-270000 --dry-run --memory 250MB --alsologtostderr --driver=docker 

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-270000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (844.512754ms)

                                                
                                                
-- stdout --
	* [functional-270000] minikube v1.29.0 sur Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0203 14:17:12.626149    5189 out.go:296] Setting OutFile to fd 1 ...
	I0203 14:17:12.626362    5189 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:17:12.626369    5189 out.go:309] Setting ErrFile to fd 2...
	I0203 14:17:12.626375    5189 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:17:12.626549    5189 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 14:17:12.627169    5189 out.go:303] Setting JSON to false
	I0203 14:17:12.646919    5189 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":1007,"bootTime":1675461625,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0203 14:17:12.647020    5189 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0203 14:17:12.668055    5189 out.go:177] * [functional-270000] minikube v1.29.0 sur Darwin 13.2
	I0203 14:17:12.705409    5189 notify.go:220] Checking for updates...
	I0203 14:17:12.727117    5189 out.go:177]   - MINIKUBE_LOCATION=15770
	I0203 14:17:12.769066    5189 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	I0203 14:17:12.843094    5189 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0203 14:17:12.885245    5189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0203 14:17:12.927128    5189 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	I0203 14:17:12.948358    5189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0203 14:17:12.969363    5189 config.go:180] Loaded profile config "functional-270000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 14:17:12.969715    5189 driver.go:365] Setting default libvirt URI to qemu:///system
	I0203 14:17:13.072597    5189 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0203 14:17:13.072736    5189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0203 14:17:13.257763    5189 info.go:266] docker info: {ID:GSNP:GK6O:NBBA:CS3H:B4YR:6KQI:MMNQ:OHLJ:PBZ2:MCN2:S4BS:ZXUA Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-03 22:17:13.123118268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0203 14:17:13.299572    5189 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0203 14:17:13.320810    5189 start.go:296] selected driver: docker
	I0203 14:17:13.320838    5189 start.go:857] validating driver "docker" against &{Name:functional-270000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1675280603-15763@sha256:9f474b7ba8542a6ea1d4410955102c8c63c61d74579375db5b45bbc427946de8 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-270000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0203 14:17:13.320954    5189 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0203 14:17:13.345663    5189 out.go:177] 
	W0203 14:17:13.366628    5189 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0203 14:17:13.387925    5189 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.84s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.31s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 status
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)
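The second status call shows how individual fields can be pulled out with a Go template (only the template keys matter; the labels in front of them, including the log's "kublet", are free text). A quick sketch of the same query; the profile name is a placeholder, and note that minikube status uses non-zero exit codes to report stopped components.

// Sketch: query individual status fields with a Go template, as the test does.
package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    out, err := exec.Command("minikube", "-p", "functional-demo", "status", // placeholder profile
        "-f", "host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").CombinedOutput()
    if err != nil {
        // Non-zero exit codes here can simply mean a component is stopped.
        log.Printf("status exited non-zero: %v", err)
    }
    fmt.Println(string(out))
}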

                                                
                                    
TestFunctional/parallel/ServiceCmd (19.96s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-270000 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-270000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-49gj7" [ddec24fa-57d0-4e8c-846b-62d80c1b648d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:344: "hello-node-6fddd6858d-49gj7" [ddec24fa-57d0-4e8c-846b-62d80c1b648d] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 13.026473253s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 service list
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 service --namespace=default --https --url hello-node
functional_test.go:1463: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 service --namespace=default --https --url hello-node: (2.028103781s)
functional_test.go:1476: found endpoint: https://127.0.0.1:50405
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 service hello-node --url --format={{.IP}}

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1491: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 service hello-node --url --format={{.IP}}: (2.02659483s)
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 service hello-node --url
functional_test.go:1505: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 service hello-node --url: (2.028895749s)
functional_test.go:1511: found endpoint for hello-node: http://127.0.0.1:50419
--- PASS: TestFunctional/parallel/ServiceCmd (19.96s)
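The flow here is: create a deployment, expose it as a NodePort service, then let minikube service --url resolve a reachable endpoint (the roughly 2 s per lookup in the log is likely the localhost tunnel the Docker driver sets up). A sketch of the same flow; the image and commands come from the log, but the context and profile names are placeholders.

// Sketch: deploy an echo server, expose it, and resolve its URL via `minikube service`.
package main

import (
    "fmt"
    "log"
    "os/exec"
    "strings"
)

// must runs a command and aborts with its output if it fails.
func must(name string, args ...string) string {
    out, err := exec.Command(name, args...).CombinedOutput()
    if err != nil {
        log.Fatalf("%s %v: %v\n%s", name, args, err, out)
    }
    return string(out)
}

func main() {
    ctx, profile := "functional-demo", "functional-demo" // placeholders

    must("kubectl", "--context", ctx, "create", "deployment", "hello-node",
        "--image=k8s.gcr.io/echoserver:1.8")
    must("kubectl", "--context", ctx, "expose", "deployment", "hello-node",
        "--type=NodePort", "--port=8080")
    // Prints a reachable URL for the service (via a localhost tunnel on the Docker driver).
    url := strings.TrimSpace(must("minikube", "-p", profile, "service", "hello-node", "--url"))
    fmt.Println("hello-node endpoint:", url)
}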

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.3s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.30s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.05s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [463d9fc2-ae7d-48b6-85e0-33fe250cf667] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010622721s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-270000 get storageclass -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-270000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-270000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-270000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4de885e7-61a7-4584-8058-cb6382720ad9] Pending
helpers_test.go:344: "sp-pod" [4de885e7-61a7-4584-8058-cb6382720ad9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [4de885e7-61a7-4584-8058-cb6382720ad9] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.010114605s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-270000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-270000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-270000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c150f18b-150a-4747-842e-e39eb1a02878] Pending
helpers_test.go:344: "sp-pod" [c150f18b-150a-4747-842e-e39eb1a02878] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [c150f18b-150a-4747-842e-e39eb1a02878] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.009579672s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-270000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.05s)
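The core assertion of this test is that a file written through the claim survives deleting and recreating the pod. A sketch of that check using the same testdata manifests named in the log (paths are relative to the minikube test tree); the kubectl context is a placeholder, and the real test waits for the pod to be Running before each exec.

// Sketch: verify that data written to a PVC-backed mount survives pod recreation.
package main

import (
    "log"
    "os/exec"
)

func kubectl(args ...string) {
    all := append([]string{"--context", "functional-demo"}, args...) // placeholder context
    if out, err := exec.Command("kubectl", all...).CombinedOutput(); err != nil {
        log.Fatalf("kubectl %v: %v\n%s", args, err, out)
    }
}

func main() {
    kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
    kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
    // (In the real test a wait-for-Running step sits here before each exec.)
    kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the claim
    kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
    kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml") // new pod, same claim
    kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")             // foo should still be listed
}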

                                                
                                    
TestFunctional/parallel/SSHCmd (0.81s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.33s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh -n functional-270000 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 cp functional-270000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd2864610260/001/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh -n functional-270000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.33s)
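The cp test just round-trips a file: host to node, then node back to host, checking the content each way with cat over ssh. A sketch of the same round trip; the profile name and local file names are placeholders.

// Sketch: round-trip a file through `minikube cp` and confirm the content survives.
package main

import (
    "log"
    "os"
    "os/exec"
)

func run(args ...string) {
    if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
        log.Fatalf("minikube %v: %v\n%s", args, err, out)
    }
}

func main() {
    p := "functional-demo" // placeholder profile
    if err := os.WriteFile("cp-test.txt", []byte("hello from the host\n"), 0o644); err != nil {
        log.Fatal(err)
    }
    run("-p", p, "cp", "cp-test.txt", "/home/docker/cp-test.txt")      // host -> node
    run("-p", p, "cp", p+":/home/docker/cp-test.txt", "roundtrip.txt") // node -> host
    back, err := os.ReadFile("roundtrip.txt")
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("round-tripped content: %q", back)
}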

                                                
                                    
TestFunctional/parallel/MySQL (26.57s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-270000 replace --force -f testdata/mysql.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-6x5jk" [6e93a1d9-117b-444e-a95f-958f96a3cb89] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-6x5jk" [6e93a1d9-117b-444e-a95f-958f96a3cb89] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.061893563s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-270000 exec mysql-888f84dd9-6x5jk -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-270000 exec mysql-888f84dd9-6x5jk -- mysql -ppassword -e "show databases;": exit status 1 (232.003723ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-270000 exec mysql-888f84dd9-6x5jk -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-270000 exec mysql-888f84dd9-6x5jk -- mysql -ppassword -e "show databases;": exit status 1 (171.965593ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
E0203 14:16:33.959390    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
functional_test.go:1734: (dbg) Run:  kubectl --context functional-270000 exec mysql-888f84dd9-6x5jk -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-270000 exec mysql-888f84dd9-6x5jk -- mysql -ppassword -e "show databases;": exit status 1 (122.582272ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-270000 exec mysql-888f84dd9-6x5jk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.57s)

TestFunctional/parallel/FileSync (0.43s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/2568/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "sudo cat /etc/test/nested/copy/2568/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

TestFunctional/parallel/CertSync (2.81s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/2568.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "sudo cat /etc/ssl/certs/2568.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/2568.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "sudo cat /usr/share/ca-certificates/2568.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/25682.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "sudo cat /etc/ssl/certs/25682.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/25682.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "sudo cat /usr/share/ca-certificates/25682.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.81s)

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-270000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-270000 ssh "sudo systemctl is-active crio": exit status 1 (609.983213ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

TestFunctional/parallel/License (0.46s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

TestFunctional/parallel/Version/short (0.13s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

TestFunctional/parallel/Version/components (0.72s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.72s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-270000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-270000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-270000
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-270000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-270000 | a58975765b21e | 30B    |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-270000 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/localhost/my-image                | functional-270000 | 0b3b83a10cde4 | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| docker.io/library/nginx                     | alpine            | c433c51bbd661 | 40.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
2023/02/03 14:17:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-270000 image ls --format json:
[{"id":"0b3b83a10cde41a1db763a13202947a2ae001e9915df9eab34d57cf1a7523ce2","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-270000"],"size":"1240000"},{"id":"a58975765b21e932a11d6df318c03a174644dc267ace2d95da483baaa5e4ac75","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-270000"],"size":"30"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-270000"],"size":"32900000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"fce3269
61ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"re
poTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"51
85b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-270000 image ls --format yaml:
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: a58975765b21e932a11d6df318c03a174644dc267ace2d95da483baaa5e4ac75
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-270000
size: "30"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-270000
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-270000 ssh pgrep buildkitd: exit status 1 (382.574415ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image build -t localhost/my-image:functional-270000 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 image build -t localhost/my-image:functional-270000 testdata/build: (2.616669378s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-270000 image build -t localhost/my-image:functional-270000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in ce5ecfd757d6
Removing intermediate container ce5ecfd757d6
---> d362055c6e78
Step 3/3 : ADD content.txt /
---> 0b3b83a10cde
Successfully built 0b3b83a10cde
Successfully tagged localhost/my-image:functional-270000
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.41s)

TestFunctional/parallel/ImageCommands/Setup (2.36s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.286984603s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-270000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.36s)

TestFunctional/parallel/DockerEnv/bash (1.87s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-270000 docker-env) && out/minikube-darwin-amd64 status -p functional-270000"
functional_test.go:492: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-270000 docker-env) && out/minikube-darwin-amd64 status -p functional-270000": (1.157431638s)
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-270000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.87s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image load --daemon gcr.io/google-containers/addon-resizer:functional-270000
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 image load --daemon gcr.io/google-containers/addon-resizer:functional-270000: (3.382861444s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.73s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.45s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 update-context --alsologtostderr -v=2
E0203 14:17:14.921265    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.58s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image load --daemon gcr.io/google-containers/addon-resizer:functional-270000
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 image load --daemon gcr.io/google-containers/addon-resizer:functional-270000: (2.233294588s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.58s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0203 14:16:13.477660    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.081944335s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-270000
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image load --daemon gcr.io/google-containers/addon-resizer:functional-270000
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 image load --daemon gcr.io/google-containers/addon-resizer:functional-270000: (4.207637105s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.76s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image save gcr.io/google-containers/addon-resizer:functional-270000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 image save gcr.io/google-containers/addon-resizer:functional-270000 /Users/jenkins/workspace/addon-resizer-save.tar: (2.132159077s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image rm gcr.io/google-containers/addon-resizer:functional-270000
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.87s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.680283854s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.00s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-270000
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 image save --daemon gcr.io/google-containers/addon-resizer:functional-270000
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-270000 image save --daemon gcr.io/google-containers/addon-resizer:functional-270000: (2.46877733s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-270000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.59s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-270000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-270000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [55202121-67d4-4d57-a435-aab9774e124e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:344: "nginx-svc" [55202121-67d4-4d57-a435-aab9774e124e] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.009485115s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.14s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-270000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-270000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 4835: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "420.313281ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "81.620175ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "421.870585ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "84.681488ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

TestFunctional/parallel/MountCmd/any-port (8.72s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-270000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2635443542/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1675462620473709000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2635443542/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1675462620473709000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2635443542/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1675462620473709000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2635443542/001/test-1675462620473709000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-270000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (400.831024ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb  3 22:17 created-by-test
-rw-r--r-- 1 docker docker 24 Feb  3 22:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb  3 22:17 test-1675462620473709000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh cat /mount-9p/test-1675462620473709000
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-270000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9009cdae-b6e6-4135-835a-ca04209cda04] Pending
helpers_test.go:344: "busybox-mount" [9009cdae-b6e6-4135-835a-ca04209cda04] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [9009cdae-b6e6-4135-835a-ca04209cda04] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [9009cdae-b6e6-4135-835a-ca04209cda04] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.011471499s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-270000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-270000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port2635443542/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.72s)

TestFunctional/parallel/MountCmd/specific-port (2.45s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-270000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2181449502/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-270000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (401.077202ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-270000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2181449502/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-270000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-270000 ssh "sudo umount -f /mount-9p": exit status 1 (382.366512ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-270000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-270000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2181449502/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.45s)

TestFunctional/delete_addon-resizer_images (0.15s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-270000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-270000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-270000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (2.2s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-159000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-159000: (2.200921618s)
--- PASS: TestImageBuild/serial/NormalBuild (2.20s)

TestImageBuild/serial/BuildWithBuildArg (0.93s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-159000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.93s)

TestImageBuild/serial/BuildWithDockerIgnore (0.48s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-159000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-159000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

TestJSONOutput/start/Command (44.62s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-170000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0203 14:25:53.015991    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:26:10.669895    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-170000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (44.615957083s)
--- PASS: TestJSONOutput/start/Command (44.62s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-170000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-170000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.84s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-170000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-170000 --output=json --user=testUser: (10.839042034s)
--- PASS: TestJSONOutput/stop/Command (10.84s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.78s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-739000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-739000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (359.182531ms)

-- stdout --
	{"specversion":"1.0","id":"33afb0d8-0e5f-4749-846d-94d25da58594","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-739000] minikube v1.29.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eca6ded1-eef6-4099-842e-6f95127fd3b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15770"}}
	{"specversion":"1.0","id":"0dbcbc4d-fa9e-496e-bd8d-791dca7ca2e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig"}}
	{"specversion":"1.0","id":"d2069665-57da-41c1-bf0b-dfecf7cec945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"c528e291-da1a-482c-ac10-adb2a371a0c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4ff28052-063e-4ff0-975c-6ebe9939a6c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube"}}
	{"specversion":"1.0","id":"e5e5e9e7-2e17-4076-ada7-630e85ab15e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"65998f66-3b5b-4fea-a8f3-7ca8f15ed905","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-739000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-739000
--- PASS: TestErrorJSONOutput (0.78s)

TestKicCustomNetwork/create_custom_network (31.29s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-196000 --network=
E0203 14:26:38.371099    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-196000 --network=: (28.638301794s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-196000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-196000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-196000: (2.60021024s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.29s)

TestKicCustomNetwork/use_default_bridge_network (31.84s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-445000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-445000 --network=bridge: (29.3421863s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-445000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-445000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-445000: (2.441929169s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.84s)

TestKicExistingNetwork (31.84s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-391000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-391000 --network=existing-network: (29.073327829s)
helpers_test.go:175: Cleaning up "existing-network-391000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-391000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-391000: (2.406530468s)
--- PASS: TestKicExistingNetwork (31.84s)

TestKicCustomSubnet (32.59s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-148000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-148000 --subnet=192.168.60.0/24: (29.879440042s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-148000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-148000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-148000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-148000: (2.656559207s)
--- PASS: TestKicCustomSubnet (32.59s)

TestKicStaticIP (35.06s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-469000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-469000 --static-ip=192.168.200.200: (32.253759603s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-469000 ip
helpers_test.go:175: Cleaning up "static-ip-469000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-469000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-469000: (2.566363832s)
--- PASS: TestKicStaticIP (35.06s)

TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (69.87s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-793000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-793000 --driver=docker : (33.034004497s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-796000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-796000 --driver=docker : (29.798176427s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-793000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-796000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-796000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-796000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-796000: (2.601286957s)
helpers_test.go:175: Cleaning up "first-793000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-793000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-793000: (2.581185529s)
--- PASS: TestMinikubeProfile (69.87s)

TestMountStart/serial/StartWithMountFirst (8.03s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-759000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-759000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.031576216s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.03s)

TestMountStart/serial/VerifyMountFirst (0.4s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-759000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (8.04s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-771000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-771000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.038019795s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.04s)

TestMountStart/serial/VerifyMountSecond (0.4s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-771000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (2.14s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-759000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-759000 --alsologtostderr -v=5: (2.139960121s)
--- PASS: TestMountStart/serial/DeleteFirst (2.14s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-771000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.59s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-771000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-771000: (1.58942618s)
--- PASS: TestMountStart/serial/Stop (1.59s)

TestMountStart/serial/RestartStopped (5.9s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-771000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-771000: (4.897048913s)
--- PASS: TestMountStart/serial/RestartStopped (5.90s)

TestMountStart/serial/VerifyMountPostStop (0.4s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-771000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (79.87s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-496000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0203 14:30:53.023893    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:31:10.677871    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-496000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m19.116529964s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.87s)

TestMultiNode/serial/DeployApp2Nodes (7.67s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-496000 -- rollout status deployment/busybox: (5.605900161s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- exec busybox-6b86dd6d48-b7rw7 -- nslookup kubernetes.io
E0203 14:32:16.078144    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- exec busybox-6b86dd6d48-dglsf -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- exec busybox-6b86dd6d48-b7rw7 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- exec busybox-6b86dd6d48-dglsf -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- exec busybox-6b86dd6d48-b7rw7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- exec busybox-6b86dd6d48-dglsf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.67s)

TestMultiNode/serial/PingHostFrom2Pods (0.92s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- exec busybox-6b86dd6d48-b7rw7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- exec busybox-6b86dd6d48-b7rw7 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- exec busybox-6b86dd6d48-dglsf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-496000 -- exec busybox-6b86dd6d48-dglsf -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

TestMultiNode/serial/AddNode (22.7s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-496000 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-496000 -v 3 --alsologtostderr: (21.553733097s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-496000 status --alsologtostderr: (1.148643454s)
--- PASS: TestMultiNode/serial/AddNode (22.70s)

TestMultiNode/serial/ProfileList (0.45s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

TestMultiNode/serial/CopyFile (14.76s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 cp testdata/cp-test.txt multinode-496000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 cp multinode-496000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3215986764/001/cp-test_multinode-496000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 cp multinode-496000:/home/docker/cp-test.txt multinode-496000-m02:/home/docker/cp-test_multinode-496000_multinode-496000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000-m02 "sudo cat /home/docker/cp-test_multinode-496000_multinode-496000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 cp multinode-496000:/home/docker/cp-test.txt multinode-496000-m03:/home/docker/cp-test_multinode-496000_multinode-496000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000-m03 "sudo cat /home/docker/cp-test_multinode-496000_multinode-496000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 cp testdata/cp-test.txt multinode-496000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 cp multinode-496000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3215986764/001/cp-test_multinode-496000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 cp multinode-496000-m02:/home/docker/cp-test.txt multinode-496000:/home/docker/cp-test_multinode-496000-m02_multinode-496000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000 "sudo cat /home/docker/cp-test_multinode-496000-m02_multinode-496000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 cp multinode-496000-m02:/home/docker/cp-test.txt multinode-496000-m03:/home/docker/cp-test_multinode-496000-m02_multinode-496000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000-m03 "sudo cat /home/docker/cp-test_multinode-496000-m02_multinode-496000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 cp testdata/cp-test.txt multinode-496000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 cp multinode-496000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile3215986764/001/cp-test_multinode-496000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 cp multinode-496000-m03:/home/docker/cp-test.txt multinode-496000:/home/docker/cp-test_multinode-496000-m03_multinode-496000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000 "sudo cat /home/docker/cp-test_multinode-496000-m03_multinode-496000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 cp multinode-496000-m03:/home/docker/cp-test.txt multinode-496000-m02:/home/docker/cp-test_multinode-496000-m03_multinode-496000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 ssh -n multinode-496000-m02 "sudo cat /home/docker/cp-test_multinode-496000-m03_multinode-496000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.76s)

TestMultiNode/serial/StopNode (3.07s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-496000 node stop m03: (1.564662887s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-496000 status: exit status 7 (753.298292ms)

-- stdout --
	multinode-496000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-496000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-496000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-496000 status --alsologtostderr: exit status 7 (751.337301ms)

-- stdout --
	multinode-496000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-496000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-496000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0203 14:32:58.426770    8910 out.go:296] Setting OutFile to fd 1 ...
	I0203 14:32:58.426933    8910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:32:58.426939    8910 out.go:309] Setting ErrFile to fd 2...
	I0203 14:32:58.426943    8910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:32:58.427060    8910 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 14:32:58.427236    8910 out.go:303] Setting JSON to false
	I0203 14:32:58.427257    8910 mustload.go:65] Loading cluster: multinode-496000
	I0203 14:32:58.427300    8910 notify.go:220] Checking for updates...
	I0203 14:32:58.427535    8910 config.go:180] Loaded profile config "multinode-496000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 14:32:58.427547    8910 status.go:255] checking status of multinode-496000 ...
	I0203 14:32:58.427942    8910 cli_runner.go:164] Run: docker container inspect multinode-496000 --format={{.State.Status}}
	I0203 14:32:58.484162    8910 status.go:330] multinode-496000 host status = "Running" (err=<nil>)
	I0203 14:32:58.484190    8910 host.go:66] Checking if "multinode-496000" exists ...
	I0203 14:32:58.484412    8910 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-496000
	I0203 14:32:58.541572    8910 host.go:66] Checking if "multinode-496000" exists ...
	I0203 14:32:58.541852    8910 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 14:32:58.541920    8910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-496000
	I0203 14:32:58.599305    8910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51376 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/multinode-496000/id_rsa Username:docker}
	I0203 14:32:58.688334    8910 ssh_runner.go:195] Run: systemctl --version
	I0203 14:32:58.692881    8910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 14:32:58.702313    8910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-496000
	I0203 14:32:58.759618    8910 kubeconfig.go:92] found "multinode-496000" server: "https://127.0.0.1:51380"
	I0203 14:32:58.759643    8910 api_server.go:165] Checking apiserver status ...
	I0203 14:32:58.759684    8910 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0203 14:32:58.769674    8910 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1895/cgroup
	W0203 14:32:58.777793    8910 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1895/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0203 14:32:58.777861    8910 ssh_runner.go:195] Run: ls
	I0203 14:32:58.781832    8910 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51380/healthz ...
	I0203 14:32:58.786308    8910 api_server.go:278] https://127.0.0.1:51380/healthz returned 200:
	ok
	I0203 14:32:58.786320    8910 status.go:421] multinode-496000 apiserver status = Running (err=<nil>)
	I0203 14:32:58.786333    8910 status.go:257] multinode-496000 status: &{Name:multinode-496000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 14:32:58.786344    8910 status.go:255] checking status of multinode-496000-m02 ...
	I0203 14:32:58.786574    8910 cli_runner.go:164] Run: docker container inspect multinode-496000-m02 --format={{.State.Status}}
	I0203 14:32:58.845641    8910 status.go:330] multinode-496000-m02 host status = "Running" (err=<nil>)
	I0203 14:32:58.845663    8910 host.go:66] Checking if "multinode-496000-m02" exists ...
	I0203 14:32:58.845921    8910 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-496000-m02
	I0203 14:32:58.903662    8910 host.go:66] Checking if "multinode-496000-m02" exists ...
	I0203 14:32:58.903935    8910 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0203 14:32:58.904033    8910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-496000-m02
	I0203 14:32:58.962016    8910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51452 SSHKeyPath:/Users/jenkins/minikube-integration/15770-1719/.minikube/machines/multinode-496000-m02/id_rsa Username:docker}
	I0203 14:32:59.050942    8910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0203 14:32:59.060202    8910 status.go:257] multinode-496000-m02 status: &{Name:multinode-496000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0203 14:32:59.060230    8910 status.go:255] checking status of multinode-496000-m03 ...
	I0203 14:32:59.060498    8910 cli_runner.go:164] Run: docker container inspect multinode-496000-m03 --format={{.State.Status}}
	I0203 14:32:59.120717    8910 status.go:330] multinode-496000-m03 host status = "Stopped" (err=<nil>)
	I0203 14:32:59.120744    8910 status.go:343] host is not running, skipping remaining checks
	I0203 14:32:59.120754    8910 status.go:257] multinode-496000-m03 status: &{Name:multinode-496000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.07s)

TestMultiNode/serial/StartAfterStop (10.19s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-496000 node start m03 --alsologtostderr: (9.108037391s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.19s)

TestMultiNode/serial/RestartKeepsNodes (113.99s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-496000
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-496000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-496000: (23.018139554s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-496000 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-496000 --wait=true -v=8 --alsologtostderr: (1m30.851962934s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-496000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (113.99s)

TestMultiNode/serial/DeleteNode (6.11s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-496000 node delete m03: (5.23111627s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.11s)

TestMultiNode/serial/StopMultiNode (22.02s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-496000 stop: (21.68425547s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-496000 status: exit status 7 (169.604153ms)

-- stdout --
	multinode-496000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-496000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-496000 status --alsologtostderr: exit status 7 (168.731547ms)

-- stdout --
	multinode-496000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-496000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0203 14:35:31.328236    9494 out.go:296] Setting OutFile to fd 1 ...
	I0203 14:35:31.328402    9494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:35:31.328407    9494 out.go:309] Setting ErrFile to fd 2...
	I0203 14:35:31.328411    9494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0203 14:35:31.328527    9494 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15770-1719/.minikube/bin
	I0203 14:35:31.328697    9494 out.go:303] Setting JSON to false
	I0203 14:35:31.328719    9494 mustload.go:65] Loading cluster: multinode-496000
	I0203 14:35:31.328756    9494 notify.go:220] Checking for updates...
	I0203 14:35:31.328998    9494 config.go:180] Loaded profile config "multinode-496000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0203 14:35:31.329011    9494 status.go:255] checking status of multinode-496000 ...
	I0203 14:35:31.329375    9494 cli_runner.go:164] Run: docker container inspect multinode-496000 --format={{.State.Status}}
	I0203 14:35:31.384154    9494 status.go:330] multinode-496000 host status = "Stopped" (err=<nil>)
	I0203 14:35:31.384173    9494 status.go:343] host is not running, skipping remaining checks
	I0203 14:35:31.384179    9494 status.go:257] multinode-496000 status: &{Name:multinode-496000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0203 14:35:31.384198    9494 status.go:255] checking status of multinode-496000-m02 ...
	I0203 14:35:31.384439    9494 cli_runner.go:164] Run: docker container inspect multinode-496000-m02 --format={{.State.Status}}
	I0203 14:35:31.440820    9494 status.go:330] multinode-496000-m02 host status = "Stopped" (err=<nil>)
	I0203 14:35:31.440846    9494 status.go:343] host is not running, skipping remaining checks
	I0203 14:35:31.440854    9494 status.go:257] multinode-496000-m02 status: &{Name:multinode-496000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.02s)

TestMultiNode/serial/RestartMultiNode (50.32s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-496000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0203 14:35:53.031709    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:36:10.685691    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-496000 --wait=true -v=8 --alsologtostderr --driver=docker : (49.426096427s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-496000 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.32s)

TestMultiNode/serial/ValidateNameConflict (33.74s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-496000
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-496000-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-496000-m02 --driver=docker : exit status 14 (705.017435ms)

-- stdout --
	* [multinode-496000-m02] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-496000-m02' is duplicated with machine name 'multinode-496000-m02' in profile 'multinode-496000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-496000-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-496000-m03 --driver=docker : (29.895700475s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-496000
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-496000: exit status 80 (482.523232ms)

-- stdout --
	* Adding node m03 to cluster multinode-496000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-496000-m03 already exists in multinode-496000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-496000-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-496000-m03: (2.597702023s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.74s)

TestPreload (122.57s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-382000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0203 14:37:33.749202    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-382000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m0.226345268s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-382000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-382000 -- docker pull gcr.io/k8s-minikube/busybox: (2.114466566s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-382000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-382000: (10.844491196s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-382000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-382000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (46.28305614s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-382000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-382000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-382000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-382000: (2.678734972s)
--- PASS: TestPreload (122.57s)

TestScheduledStopUnix (103.8s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-975000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-975000 --memory=2048 --driver=docker : (29.371339951s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-975000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-975000 -n scheduled-stop-975000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-975000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-975000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-975000 -n scheduled-stop-975000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-975000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-975000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-975000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-975000: exit status 7 (115.965278ms)

-- stdout --
	scheduled-stop-975000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-975000 -n scheduled-stop-975000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-975000 -n scheduled-stop-975000: exit status 7 (112.592897ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-975000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-975000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-975000: (2.305182246s)
--- PASS: TestScheduledStopUnix (103.80s)

                                                
                                    
TestSkaffold (61.91s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3756405888 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-244000 --memory=2600 --driver=docker 
E0203 14:40:53.023368    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
E0203 14:41:10.677561    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-244000 --memory=2600 --driver=docker : (30.276478325s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3756405888 run --minikube-profile skaffold-244000 --kube-context skaffold-244000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3756405888 run --minikube-profile skaffold-244000 --kube-context skaffold-244000 --status-check=true --port-forward=false --interactive=false: (16.999655983s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-78dbf58f9b-czbht" [78cd72ac-8818-4b35-b83a-5a432651e102] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012873281s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-95cc69b9-x2nsr" [979ed145-c8c6-4582-bdf8-712027ae103c] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006594394s
helpers_test.go:175: Cleaning up "skaffold-244000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-244000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-244000: (2.861086706s)
--- PASS: TestSkaffold (61.91s)

                                                
                                    
TestInsufficientStorage (14.44s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-440000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-440000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.245523715s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c86b024e-f15b-4726-a966-aa667f794b1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-440000] minikube v1.29.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"82527f7c-a382-4501-80a2-c188d09d702c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15770"}}
	{"specversion":"1.0","id":"9c4c9b87-d01d-4ab3-b4a3-cb14911dd23c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig"}}
	{"specversion":"1.0","id":"03f43b5e-4a11-4060-a66e-f48731b2b9d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"a4b6ff81-be97-48d3-a7e0-f1da2b6225c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"973e8337-1326-4408-8adc-e288d1a763bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube"}}
	{"specversion":"1.0","id":"a7ce7020-a5a1-4d05-85f0-ba3c3f6e43b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2295a204-5503-4deb-b1d2-73b4eb0c6116","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a1b081d1-d187-4d77-b932-0d240d84da12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"22288923-d8e9-41ea-b979-b3b03c1e4a13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"90ff43ae-a429-4cde-92fb-aade3ce6d3b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"ab0df413-ee43-47e3-9d96-020dc29ab016","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-440000 in cluster insufficient-storage-440000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b53dbdfe-2378-42d5-8fe9-ce9658dc9baf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d9cb8f8-ae3d-4bfc-a797-16cfdb2f25d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"37fdb746-0ba5-4ab4-802f-d370692baca6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-440000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-440000 --output=json --layout=cluster: exit status 7 (392.84963ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-440000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-440000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 14:42:00.161708   11271 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-440000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-440000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-440000 --output=json --layout=cluster: exit status 7 (398.582757ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-440000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-440000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0203 14:42:00.561193   11285 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-440000" does not appear in /Users/jenkins/minikube-integration/15770-1719/kubeconfig
	E0203 14:42:00.570058   11285 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/insufficient-storage-440000/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-440000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-440000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-440000: (2.406787785s)
--- PASS: TestInsufficientStorage (14.44s)
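
Note on the output format exercised above: with --output=json, minikube start emits one CloudEvents-style JSON object per line (specversion, id, source, type, data), as shown in the stdout block of TestInsufficientStorage. The following is a minimal sketch, not part of the test suite, of how such lines could be consumed from a pipe; the event struct and its field set are assumptions modelled only on the lines logged here, not types taken from minikube itself.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the logged JSON lines above
// (illustrative assumption, not minikube's own type).
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// e.g. "io.k8s.sigs.minikube.error: Docker is out of disk space! ..."
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}

Piping the stdout block above through this sketch would print the step and error messages in order, ending with the RSRC_DOCKER_STORAGE message that drives exit status 26.
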

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.24s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15770
- KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3148255457/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3148255457/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3148255457/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3148255457/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.24s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.26s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15770
- KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2117586299/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2117586299/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2117586299/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2117586299/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.92s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-915000
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-915000: (3.534204993s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.53s)

                                                
                                    
TestPause/serial/Start (46.03s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-159000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0203 14:48:56.086561    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/addons-379000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-159000 --memory=2048 --install-addons=false --wait=all --driver=docker : (46.027622362s)
--- PASS: TestPause/serial/Start (46.03s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (44s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-159000 --alsologtostderr -v=1 --driver=docker 
E0203 14:49:19.505726    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-159000 --alsologtostderr -v=1 --driver=docker : (43.982300818s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.00s)

                                                
                                    
TestPause/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-159000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-159000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-159000 --output=json --layout=cluster: exit status 2 (415.450046ms)

                                                
                                                
-- stdout --
	{"Name":"pause-159000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-159000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
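
For reference, the --layout=cluster status document shown above (and earlier in TestInsufficientStorage) nests per-node component states under a top-level cluster object. A minimal sketch of unmarshalling one such document, using hypothetical type names modelled only on the fields visible in this report:

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative types; field names follow the JSON keys logged above.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]component `json:"Components"`
	Nodes         []node               `json:"Nodes"`
}

func main() {
	// Abridged copy of the status document logged for pause-159000.
	raw := `{"Name":"pause-159000","StatusCode":418,"StatusName":"Paused","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-159000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.Name, st.StatusName, "nodes:", len(st.Nodes))
}

Running it against the abridged document prints "pause-159000 Paused nodes: 1", which matches the 418/405 component codes that make the status command exit with status 2 above.
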

                                                
                                    
TestPause/serial/Unpause (0.62s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-159000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
TestPause/serial/PauseAgain (0.82s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-159000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

                                                
                                    
TestPause/serial/DeletePaused (2.63s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-159000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-159000 --alsologtostderr -v=5: (2.630137646s)
--- PASS: TestPause/serial/DeletePaused (2.63s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.56s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-159000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-159000: exit status 1 (54.093062ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-159000

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.56s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-729000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-729000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (378.033192ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-729000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15770
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15770-1719/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15770-1719/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (30.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-729000 --driver=docker 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-729000 --driver=docker : (30.501734333s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-729000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.92s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-729000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-729000 --no-kubernetes --driver=docker : (5.890349119s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-729000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-729000 status -o json: exit status 2 (404.107995ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-729000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-729000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-729000: (2.427881857s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.72s)

                                                
                                    
TestNoKubernetes/serial/Start (6.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-729000 --no-kubernetes --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-729000 --no-kubernetes --driver=docker : (6.966625094s)
--- PASS: TestNoKubernetes/serial/Start (6.97s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-729000 "sudo systemctl is-active --quiet service kubelet"

                                                
                                                
=== CONT  TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-729000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (440.162575ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (1.215839102s)
--- PASS: TestNoKubernetes/serial/ProfileList (1.87s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-729000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-729000: (1.590140404s)
--- PASS: TestNoKubernetes/serial/Stop (1.59s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-729000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-729000 --driver=docker : (5.786295409s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.79s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-729000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-729000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (486.040247ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.49s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (52.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
E0203 14:51:10.690107    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory
E0203 14:51:35.662769    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (52.822435505s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.82s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-292000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (15.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-292000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-ph64t" [55a9a6b6-1ff1-4008-a871-2700468de0a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0203 14:52:03.350498    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-ph64t" [55a9a6b6-1ff1-4008-a871-2700468de0a9] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.005954071s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.21s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-292000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (51.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (51.863901022s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.86s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-k9l8q" [901977fe-70b6-4312-bc44-00e11867f23d] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.017796586s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-292000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (15.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-292000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-456fn" [0d7ee5f8-e34c-402e-8758-6190ccc761b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-456fn" [0d7ee5f8-e34c-402e-8758-6190ccc761b7] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.007135295s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-292000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (58.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (58.593195193s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.59s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mqqbq" [1077d189-0f79-426a-9bd8-9a6b5df9fb64] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.016401146s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-292000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (14.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-292000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-dmsrs" [fc1f47e3-5bf9-4411-9058-2b3b44e7e2d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-dmsrs" [fc1f47e3-5bf9-4411-9058-2b3b44e7e2d3] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.008421526s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-292000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (51.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
E0203 14:56:10.696461    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/functional-270000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (51.827195223s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (51.83s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (47.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
E0203 14:56:35.668100    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (47.15110293s)
--- PASS: TestNetworkPlugins/group/bridge/Start (47.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-292000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-292000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-454qm" [eb6f8bc0-d9eb-4655-b526-f66b95904716] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0203 14:56:59.974702    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 14:56:59.979819    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 14:56:59.990161    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 14:57:00.010277    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 14:57:00.052316    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 14:57:00.132427    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 14:57:00.293139    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 14:57:00.613634    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 14:57:01.254071    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-454qm" [eb6f8bc0-d9eb-4655-b526-f66b95904716] Running
E0203 14:57:02.535551    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 14:57:05.136263    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 16.007370179s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-292000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-292000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (15.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-292000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-jz9mq" [19128447-e358-4229-b025-0c149f0bb121] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0203 14:57:20.497597    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-jz9mq" [19128447-e358-4229-b025-0c149f0bb121] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.006692954s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (45.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (45.556773002s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (45.56s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-292000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (62.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (1m2.766640112s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.77s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-292000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (15.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-292000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-hdp7t" [2bc17ab3-e668-4829-a83e-882a815016e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0203 14:58:21.940802    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-hdp7t" [2bc17ab3-e668-4829-a83e-882a815016e6] Running
E0203 14:58:30.603915    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 14:58:30.608977    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 14:58:30.619041    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 14:58:30.641185    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 14:58:30.681323    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 14:58:30.761539    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 14:58:30.921720    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 14:58:31.242152    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 14:58:31.882318    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 14:58:33.163062    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 15.007522725s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (15.20s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-292000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

TestNetworkPlugins/group/kubenet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)

TestNetworkPlugins/group/calico/Start (75.86s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m15.864336247s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.86s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.5s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-292000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.50s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (14.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-292000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-c4dw4" [7b71ec54-989f-4bfd-8c61-049d313443a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-c4dw4" [7b71ec54-989f-4bfd-8c61-049d313443a7] Running
E0203 14:59:11.565893    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.006940019s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-292000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/false/Start (55.29s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
E0203 14:59:43.862738    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 14:59:52.527034    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 15:00:14.256803    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:00:14.261903    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:00:14.272020    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-292000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (55.289169684s)
--- PASS: TestNetworkPlugins/group/false/Start (55.29s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jj54c" [49bec515-f474-4639-8149-136668fdd150] Running
E0203 15:00:14.292123    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:00:14.332366    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:00:14.412513    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:00:14.572640    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:00:14.892798    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:00:15.533010    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:00:16.813734    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.015419513s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-292000 "pgrep -a kubelet"
E0203 15:00:19.374116    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (19.2s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-292000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-th9mc" [c6bc68fd-5712-4188-acee-51783d8cb40b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0203 15:00:24.494384    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-th9mc" [c6bc68fd-5712-4188-acee-51783d8cb40b] Running
E0203 15:00:34.735492    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 19.007148619s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (19.20s)

TestNetworkPlugins/group/false/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-292000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.42s)

TestNetworkPlugins/group/false/NetCatPod (29.24s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-292000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-w88x8" [d9a86446-04db-4fd0-be28-fdb51740dfa4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:344: "netcat-694fc96674-w88x8" [d9a86446-04db-4fd0-be28-fdb51740dfa4] Running

=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 29.008842238s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (29.24s)

TestNetworkPlugins/group/calico/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-292000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/false/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-292000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

TestNetworkPlugins/group/false/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

TestNetworkPlugins/group/false/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-292000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)
E0203 15:34:01.951683    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:34:07.306834    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:07.311902    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:07.322193    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:07.344256    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:07.384481    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:07.464570    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:07.625445    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:07.946424    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:08.588272    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:09.868421    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:12.429960    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:17.550313    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:27.812559    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (58.72s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-520000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0203 15:01:35.675267    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 15:01:36.178915    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:01:51.811248    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:01:51.816452    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:01:51.826654    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:01:51.846723    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:01:51.887080    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:01:51.967185    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:01:52.127289    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:01:52.447391    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:01:53.087553    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:01:54.367696    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:01:56.929850    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:01:59.982187    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 15:02:02.050590    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:02:12.291976    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:02:18.111780    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:02:18.116926    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:02:18.127037    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:02:18.148363    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:02:18.190481    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:02:18.270670    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:02:18.430815    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:02:18.750961    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:02:19.392196    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:02:20.672824    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:02:23.234485    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:02:27.706669    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
E0203 15:02:28.354741    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-520000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (58.719999367s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.72s)

TestStartStop/group/no-preload/serial/DeployApp (10.27s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-520000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2b7b837c-1cc3-4c3b-b3e4-c435f2285404] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0203 15:02:32.772680    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2b7b837c-1cc3-4c3b-b3e4-c435f2285404] Running
E0203 15:02:38.595281    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.021215737s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-520000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-520000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-520000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/no-preload/serial/Stop (10.91s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-520000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-520000 --alsologtostderr -v=3: (10.908778009s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.91s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.39s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-520000 -n no-preload-520000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-520000 -n no-preload-520000: exit status 7 (114.077379ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-520000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.39s)

TestStartStop/group/no-preload/serial/SecondStart (555.44s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-520000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0203 15:02:58.108182    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:02:58.732463    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/skaffold-244000/client.crt: no such file or directory
E0203 15:02:59.083883    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:03:13.742022    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:03:18.543345    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:18.549711    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:18.561805    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:18.583495    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:18.624315    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:18.704922    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:18.865406    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:19.185703    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:19.826175    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:21.108310    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:23.669120    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:28.791704    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:30.617286    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 15:03:39.032354    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:03:40.049614    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:03:58.360543    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
E0203 15:03:59.513222    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:04:01.918054    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:01.924332    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:01.934880    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:01.957015    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:01.997268    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:02.077645    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:02.237971    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:02.558336    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:03.215596    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:04.495958    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:07.056158    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:12.176547    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:22.417619    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:04:35.665511    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
E0203 15:04:40.474704    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
E0203 15:04:42.899028    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:05:01.971995    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
E0203 15:05:14.273948    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:05:14.298891    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:05:14.305275    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:05:14.315487    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:05:14.335661    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:05:14.377127    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:05:14.459283    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:05:14.620065    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:05:14.940271    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-520000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (9m15.015509879s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-520000 -n no-preload-520000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (555.44s)

TestStartStop/group/old-k8s-version/serial/Stop (1.57s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-136000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-136000 --alsologtostderr -v=3: (1.573038837s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.57s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.4s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-136000 -n old-k8s-version-136000: exit status 7 (115.260071ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-136000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-z6vnv" [f04535a5-8fe2-4749-a11d-268f8406a981] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013643332s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-z6vnv" [f04535a5-8fe2-4749-a11d-268f8406a981] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0071617s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-520000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-520000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/no-preload/serial/Pause (3.2s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-520000 --alsologtostderr -v=1
E0203 15:12:18.132149    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-520000 -n no-preload-520000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-520000 -n no-preload-520000: exit status 2 (417.860826ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-520000 -n no-preload-520000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-520000 -n no-preload-520000: exit status 2 (417.505177ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-520000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-520000 -n no-preload-520000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-520000 -n no-preload-520000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.20s)

TestStartStop/group/embed-certs/serial/FirstStart (53.69s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-913000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0203 15:12:29.875966    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:12:29.881075    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:12:29.891214    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:12:29.911289    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:12:29.951433    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:12:30.031615    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:12:30.192171    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:12:30.512314    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:12:31.152539    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:12:32.432733    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:12:34.992891    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:12:40.113593    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:12:50.355948    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:13:10.837212    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-913000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (53.685561768s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.69s)

TestStartStop/group/embed-certs/serial/DeployApp (10.28s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-913000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0dacb81f-185a-4686-9366-bb47de54a0ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0203 15:13:18.558156    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [0dacb81f-185a-4686-9366-bb47de54a0ee] Running
E0203 15:13:23.090874    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/auto-292000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.01354988s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-913000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.9s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-913000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-913000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/embed-certs/serial/Stop (10.88s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-913000 --alsologtostderr -v=3
E0203 15:13:30.631721    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-913000 --alsologtostderr -v=3: (10.879099859s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.88s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-913000 -n embed-certs-913000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-913000 -n embed-certs-913000: exit status 7 (113.808233ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-913000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (553.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-913000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0203 15:13:51.800425    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/no-preload-520000/client.crt: no such file or directory
E0203 15:14:01.932069    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/custom-flannel-292000/client.crt: no such file or directory
E0203 15:14:53.735956    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kindnet-292000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-913000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (9m12.643764801s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-913000 -n embed-certs-913000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (553.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-hvp89" [a1df2337-b7e5-4d21-bb24-c07db5944d2c] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014875888s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-hvp89" [a1df2337-b7e5-4d21-bb24-c07db5944d2c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006424914s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-913000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-913000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-913000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-913000 -n embed-certs-913000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-913000 -n embed-certs-913000: exit status 2 (419.711369ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-913000 -n embed-certs-913000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-913000 -n embed-certs-913000: exit status 2 (414.487591ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-913000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-913000 -n embed-certs-913000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-913000 -n embed-certs-913000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-893000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
E0203 15:23:18.572714    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/kubenet-292000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-893000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (55.709488145s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-893000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1fd4b6e8-d60d-4779-9b23-0c520ddeb2b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1fd4b6e8-d60d-4779-9b23-0c520ddeb2b2] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.014516689s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-893000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-893000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-893000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-893000 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-893000 --alsologtostderr -v=3: (10.972265237s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-893000 -n default-k8s-diff-port-893000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-893000 -n default-k8s-diff-port-893000: exit status 7 (115.83544ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-893000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (556.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-893000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-893000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (9m15.772689019s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-893000 -n default-k8s-diff-port-893000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (556.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-vfx5j" [a65bf198-2f56-4ed0-8ff5-00bf7aa284ce] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014688982s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-vfx5j" [a65bf198-2f56-4ed0-8ff5-00bf7aa284ce] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007911708s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-893000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-893000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-893000 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-893000 -n default-k8s-diff-port-893000

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-893000 -n default-k8s-diff-port-893000: exit status 2 (538.137599ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-893000 -n default-k8s-diff-port-893000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-893000 -n default-k8s-diff-port-893000: exit status 2 (502.656388ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-893000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-893000 -n default-k8s-diff-port-893000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-893000 -n default-k8s-diff-port-893000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.74s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (44.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-405000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-405000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (44.775303699s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.78s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-405000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-405000 --alsologtostderr -v=3
E0203 15:34:48.293293    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/default-k8s-diff-port-893000/client.crt: no such file or directory
E0203 15:34:54.901659    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/enable-default-cni-292000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-405000 --alsologtostderr -v=3: (10.910070926s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-405000 -n newest-cni-405000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-405000 -n newest-cni-405000: exit status 7 (113.343955ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-405000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (24.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-405000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
E0203 15:35:05.952248    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: no such file or directory
E0203 15:35:05.958571    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: no such file or directory
E0203 15:35:05.969369    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: no such file or directory
E0203 15:35:05.991337    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: no such file or directory
E0203 15:35:06.031583    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: no such file or directory
E0203 15:35:06.112485    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: no such file or directory
E0203 15:35:06.272675    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: no such file or directory
E0203 15:35:06.594857    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: no such file or directory
E0203 15:35:07.237133    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: no such file or directory
E0203 15:35:08.519079    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: no such file or directory
E0203 15:35:11.079349    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: no such file or directory
E0203 15:35:14.307039    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/flannel-292000/client.crt: no such file or directory
E0203 15:35:14.331327    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/calico-292000/client.crt: no such file or directory
E0203 15:35:16.200666    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/old-k8s-version-136000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-405000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (24.40145237s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-405000 -n newest-cni-405000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-405000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-405000 --alsologtostderr -v=1
E0203 15:35:21.208825    2568 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15770-1719/.minikube/profiles/bridge-292000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-405000 -n newest-cni-405000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-405000 -n newest-cni-405000: exit status 2 (415.009037ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-405000 -n newest-cni-405000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-405000 -n newest-cni-405000: exit status 2 (411.454636ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-405000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-405000 -n newest-cni-405000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-405000 -n newest-cni-405000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.22s)

                                                
                                    

Test skip (18/306)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (14.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 9.435899ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:344: "registry-76tfr" [e338aeff-35a4-4182-9012-29d81158c4c5] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008367353s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-x2w9t" [a02fec76-00c1-4150-9507-fc757aac3c9b] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010058635s
addons_test.go:305: (dbg) Run:  kubectl --context addons-379000 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-379000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:310: (dbg) Done: kubectl --context addons-379000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.597107668s)
addons_test.go:320: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.72s)

                                                
                                    
TestAddons/parallel/Ingress (12.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-379000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-379000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-379000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a913027a-b9f9-48fe-8bec-f7dff810313a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:344: "nginx" [a913027a-b9f9-48fe-8bec-f7dff810313a] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.006159203s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p addons-379000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:247: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.27s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-270000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-270000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-dj2xq" [e4a462bd-25fd-4515-959f-0d47169d15f8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:344: "hello-node-connect-5cf7cc858f-dj2xq" [e4a462bd-25fd-4515-959f-0d47169d15f8] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.042666198s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.15s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (7.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-292000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-292000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-292000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-292000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-292000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-292000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-292000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-292000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-292000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-292000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-292000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-292000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-292000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-292000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-292000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-292000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-292000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: kubelet daemon config:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> k8s: kubelet logs:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-292000

>>> host: docker daemon status:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: docker daemon config:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: docker system info:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: cri-docker daemon status:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: cri-docker daemon config:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: cri-dockerd version:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: containerd daemon status:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: containerd daemon config:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: containerd config dump:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: crio daemon status:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: crio daemon config:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: /etc/crio:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

>>> host: crio config:
* Profile "cilium-292000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-292000"

----------------------- debugLogs end: cilium-292000 [took: 6.510627203s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-292000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-292000
--- SKIP: TestNetworkPlugins/group/cilium (7.04s)
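
Every kubectl-based collector in the debugLogs dump above fails with the same message because the cilium-292000 profile, and therefore its kubeconfig context, was never created before the group was skipped. The exact commands the harness runs are not shown in this log; the Go sketch below is only an illustration, assuming a header such as ">>> k8s: describe cilium daemon set:" maps to a kubectl describe call against the profile's context, of how a collector could probe for the context first instead of emitting one error per step.

package main

import (
	"fmt"
	"os/exec"
)

// contextExists reports whether kubectl knows about the named context.
// "kubectl config get-contexts NAME" exits non-zero when the context is
// missing, which is exactly the state cilium-292000 is in above.
func contextExists(name string) bool {
	return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
}

func main() {
	const ctx = "cilium-292000" // profile/context name taken from the log
	if !contextExists(ctx) {
		fmt.Printf("context %q does not exist; skipping kubectl collection\n", ctx)
		return
	}
	// Hypothetical collector command; the real harness may differ.
	out, _ := exec.Command("kubectl", "--context", ctx, "-n", "kube-system",
		"describe", "daemonset", "cilium").CombinedOutput()
	fmt.Println(string(out))
}

Run against the state captured in this report, the sketch would print the "does not exist" message once and stop, rather than repeating the error for every ">>>" section.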

TestStartStop/group/disable-driver-mounts (0.41s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-350000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-350000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.41s)
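
The skip above comes from a driver guard in the test itself ("only runs on virtualbox"). As a rough, hypothetical sketch of that pattern (not the actual start_stop_delete_test.go code; the driverName variable is assumed purely for illustration), a driver-gated skip in a Go integration test looks like this:

package integration

import (
	"strings"
	"testing"
)

// driverName would normally come from the test harness flags
// (e.g. --driver=docker); it is hard-coded here purely for illustration.
var driverName = "docker"

// TestDisableDriverMountsSketch shows how a test can restrict itself to a
// single VM driver: on any other driver it skips before doing real work.
func TestDisableDriverMountsSketch(t *testing.T) {
	if !strings.EqualFold(driverName, "virtualbox") {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// ... start/stop assertions would follow on virtualbox ...
}

Skipping up front like this keeps the subsequent profile cleanup (helpers_test.go:175 above) cheap, since no cluster is ever started on the docker driver.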
